Feb 20 00:10:17 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 20 00:10:18 crc kubenswrapper[5119]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 20 00:10:18 crc kubenswrapper[5119]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 20 00:10:18 crc kubenswrapper[5119]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 20 00:10:18 crc kubenswrapper[5119]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 20 00:10:18 crc kubenswrapper[5119]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Feb 20 00:10:18 crc kubenswrapper[5119]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.495756 5119 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502623 5119 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502658 5119 feature_gate.go:328] unrecognized feature gate: Example2
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502667 5119 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502676 5119 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502686 5119 feature_gate.go:328] unrecognized feature gate: SignatureStores
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502695 5119 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502705 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502713 5119 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502721 5119 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502729 5119 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502737 5119 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502745 5119 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502754 5119 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502762 5119 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502770 5119 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502778 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502786 5119 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502795 5119 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502803 5119 feature_gate.go:328] unrecognized feature gate: NewOLM
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502811 5119 feature_gate.go:328] unrecognized feature gate: DualReplica
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502821 5119 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502829 5119 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502838 5119 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502846 5119 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502853 5119 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502861 5119 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502869 5119 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502877 5119 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502884 5119 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502892 5119 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502899 5119 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502908 5119 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502917 5119 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502926 5119 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502937 5119 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502950 5119 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502961 5119 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502971 5119 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502979 5119 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502988 5119 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.502996 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503004 5119 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503013 5119 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503021 5119 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503029 5119 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503036 5119 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503044 5119 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503052 5119 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503060 5119 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503067 5119 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503075 5119 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503084 5119 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503092 5119 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503100 5119 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503108 5119 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503116 5119 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503124 5119 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503133 5119 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503142 5119 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503150 5119 feature_gate.go:328] unrecognized feature gate: PinnedImages Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503158 5119 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503166 5119 
feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503173 5119 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503181 5119 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503189 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503196 5119 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503206 5119 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503214 5119 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503221 5119 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503233 5119 feature_gate.go:328] unrecognized feature gate: OVNObservability Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503241 5119 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503249 5119 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503257 5119 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503265 5119 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503273 5119 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503280 5119 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503288 5119 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503295 5119 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503303 5119 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503311 5119 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503318 5119 feature_gate.go:328] unrecognized feature gate: InsightsConfig Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503326 5119 feature_gate.go:328] unrecognized feature gate: Example Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503333 5119 feature_gate.go:328] unrecognized feature gate: GatewayAPI Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503341 5119 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503351 5119 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.503361 5119 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504359 5119 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504379 5119 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504388 5119 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504398 5119 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504407 5119 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504416 5119 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504424 5119 feature_gate.go:328] unrecognized feature gate: SignatureStores Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504431 5119 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504439 5119 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504447 5119 feature_gate.go:328] unrecognized feature gate: Example2 Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504454 5119 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504462 5119 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504471 5119 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504478 5119 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504487 5119 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504494 5119 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504502 5119 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504509 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504517 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504525 5119 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504533 5119 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504584 5119 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504593 5119 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504601 5119 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504611 5119 feature_gate.go:328] 
unrecognized feature gate: AutomatedEtcdBackup Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504619 5119 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504627 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504635 5119 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504643 5119 feature_gate.go:328] unrecognized feature gate: OVNObservability Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504651 5119 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504662 5119 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504671 5119 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504678 5119 feature_gate.go:328] unrecognized feature gate: DualReplica Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504686 5119 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504694 5119 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504706 5119 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504714 5119 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504722 5119 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504731 5119 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504739 5119 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504748 5119 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504755 5119 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504763 5119 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504772 5119 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504780 5119 feature_gate.go:328] unrecognized feature gate: NewOLM Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504791 5119 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504798 5119 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504806 5119 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504813 5119 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504821 5119 feature_gate.go:328] unrecognized feature 
gate: ManagedBootImagesvSphere Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504829 5119 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504836 5119 feature_gate.go:328] unrecognized feature gate: GatewayAPI Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504844 5119 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504854 5119 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504862 5119 feature_gate.go:328] unrecognized feature gate: PinnedImages Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504869 5119 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504877 5119 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504885 5119 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504892 5119 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504900 5119 feature_gate.go:328] unrecognized feature gate: Example Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504908 5119 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504916 5119 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504923 5119 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504931 5119 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504938 5119 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504946 5119 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504954 5119 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504965 5119 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504974 5119 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504982 5119 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504990 5119 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.504998 5119 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.505005 5119 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.505012 5119 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.505023 5119 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.505032 5119 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.505040 5119 feature_gate.go:328] unrecognized feature gate: InsightsConfig Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.505049 5119 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.505058 5119 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.505066 5119 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.505075 5119 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.505083 5119 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.505091 5119 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.505098 5119 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.505106 5119 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.505115 5119 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505307 5119 flags.go:64] FLAG: --address="0.0.0.0" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505345 5119 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505364 5119 flags.go:64] FLAG: --anonymous-auth="true" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505378 5119 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505391 5119 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505402 5119 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505416 5119 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505649 5119 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505667 5119 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505676 5119 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505686 5119 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505696 5119 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505705 5119 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505718 5119 flags.go:64] FLAG: --cgroup-root="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505726 5119 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505735 5119 flags.go:64] FLAG: --client-ca-file="" Feb 20 00:10:18 crc 
kubenswrapper[5119]: I0220 00:10:18.505743 5119 flags.go:64] FLAG: --cloud-config="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505752 5119 flags.go:64] FLAG: --cloud-provider="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505761 5119 flags.go:64] FLAG: --cluster-dns="[]" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505772 5119 flags.go:64] FLAG: --cluster-domain="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505780 5119 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505789 5119 flags.go:64] FLAG: --config-dir="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505798 5119 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505807 5119 flags.go:64] FLAG: --container-log-max-files="5" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505820 5119 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505833 5119 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505842 5119 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505851 5119 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505860 5119 flags.go:64] FLAG: --contention-profiling="false" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505868 5119 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505877 5119 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505886 5119 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505896 5119 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505907 5119 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505916 5119 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505924 5119 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505932 5119 flags.go:64] FLAG: --enable-load-reader="false" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505941 5119 flags.go:64] FLAG: --enable-server="true" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505950 5119 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505962 5119 flags.go:64] FLAG: --event-burst="100" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505971 5119 flags.go:64] FLAG: --event-qps="50" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505979 5119 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505988 5119 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.505997 5119 flags.go:64] FLAG: --eviction-hard="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506007 5119 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506019 5119 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 20 00:10:18 crc 
kubenswrapper[5119]: I0220 00:10:18.506028 5119 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506036 5119 flags.go:64] FLAG: --eviction-soft="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506045 5119 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506053 5119 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506062 5119 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506071 5119 flags.go:64] FLAG: --experimental-mounter-path="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506079 5119 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506088 5119 flags.go:64] FLAG: --fail-swap-on="true" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506096 5119 flags.go:64] FLAG: --feature-gates="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506107 5119 flags.go:64] FLAG: --file-check-frequency="20s" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506115 5119 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506124 5119 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506134 5119 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506143 5119 flags.go:64] FLAG: --healthz-port="10248" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506152 5119 flags.go:64] FLAG: --help="false" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506160 5119 flags.go:64] FLAG: --hostname-override="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506168 5119 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506177 5119 flags.go:64] FLAG: --http-check-frequency="20s" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506187 5119 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506195 5119 flags.go:64] FLAG: --image-credential-provider-config="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506204 5119 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506212 5119 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506221 5119 flags.go:64] FLAG: --image-service-endpoint="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506231 5119 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506241 5119 flags.go:64] FLAG: --kube-api-burst="100" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506250 5119 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506259 5119 flags.go:64] FLAG: --kube-api-qps="50" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506268 5119 flags.go:64] FLAG: --kube-reserved="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506278 5119 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506287 5119 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 20 00:10:18 crc 
kubenswrapper[5119]: I0220 00:10:18.506296 5119 flags.go:64] FLAG: --kubelet-cgroups="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506307 5119 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506315 5119 flags.go:64] FLAG: --lock-file="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506324 5119 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506333 5119 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506341 5119 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506354 5119 flags.go:64] FLAG: --log-json-split-stream="false" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506363 5119 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506371 5119 flags.go:64] FLAG: --log-text-split-stream="false" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506380 5119 flags.go:64] FLAG: --logging-format="text" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506389 5119 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506399 5119 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506408 5119 flags.go:64] FLAG: --manifest-url="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506416 5119 flags.go:64] FLAG: --manifest-url-header="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506428 5119 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506438 5119 flags.go:64] FLAG: --max-open-files="1000000" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506449 5119 flags.go:64] FLAG: --max-pods="110" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506458 5119 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506467 5119 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506479 5119 flags.go:64] FLAG: --memory-manager-policy="None" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506488 5119 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506496 5119 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506505 5119 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506514 5119 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506535 5119 flags.go:64] FLAG: --node-status-max-images="50" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506602 5119 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506613 5119 flags.go:64] FLAG: --oom-score-adj="-999" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506622 5119 flags.go:64] FLAG: --pod-cidr="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506631 5119 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506646 5119 flags.go:64] FLAG: --pod-manifest-path="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506655 5119 flags.go:64] FLAG: --pod-max-pids="-1" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506664 5119 flags.go:64] FLAG: --pods-per-core="0" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506672 5119 flags.go:64] FLAG: --port="10250" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506681 5119 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506690 5119 flags.go:64] FLAG: --provider-id="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506699 5119 flags.go:64] FLAG: --qos-reserved="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506708 5119 flags.go:64] FLAG: --read-only-port="10255" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506716 5119 flags.go:64] FLAG: --register-node="true" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506725 5119 flags.go:64] FLAG: --register-schedulable="true" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506733 5119 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506748 5119 flags.go:64] FLAG: --registry-burst="10" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506757 5119 flags.go:64] FLAG: --registry-qps="5" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506765 5119 flags.go:64] FLAG: --reserved-cpus="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506774 5119 flags.go:64] FLAG: --reserved-memory="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506783 5119 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506792 5119 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506801 5119 flags.go:64] FLAG: --rotate-certificates="false" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506809 5119 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506819 5119 flags.go:64] FLAG: --runonce="false" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506828 5119 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506837 5119 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506845 5119 flags.go:64] FLAG: --seccomp-default="false" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506858 5119 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506867 5119 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506876 5119 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506884 5119 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506894 5119 flags.go:64] FLAG: --storage-driver-password="root" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506902 5119 flags.go:64] FLAG: --storage-driver-secure="false" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 
00:10:18.506912 5119 flags.go:64] FLAG: --storage-driver-table="stats" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506921 5119 flags.go:64] FLAG: --storage-driver-user="root" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506933 5119 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506953 5119 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506973 5119 flags.go:64] FLAG: --system-cgroups="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.506985 5119 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.507004 5119 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.507016 5119 flags.go:64] FLAG: --tls-cert-file="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.507027 5119 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.507041 5119 flags.go:64] FLAG: --tls-min-version="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.507052 5119 flags.go:64] FLAG: --tls-private-key-file="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.507062 5119 flags.go:64] FLAG: --topology-manager-policy="none" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.507071 5119 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.507081 5119 flags.go:64] FLAG: --topology-manager-scope="container" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.507092 5119 flags.go:64] FLAG: --v="2" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.507108 5119 flags.go:64] FLAG: --version="false" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.507122 5119 flags.go:64] FLAG: --vmodule="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.507136 5119 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.507149 5119 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507354 5119 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507366 5119 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507374 5119 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507383 5119 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507392 5119 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507401 5119 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507408 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507425 5119 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507432 5119 feature_gate.go:328] unrecognized feature gate: OVNObservability Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507440 5119 feature_gate.go:328] 
unrecognized feature gate: ManagedBootImagesAWS Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507448 5119 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507456 5119 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507464 5119 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507471 5119 feature_gate.go:328] unrecognized feature gate: PinnedImages Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507479 5119 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507487 5119 feature_gate.go:328] unrecognized feature gate: NewOLM Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507495 5119 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507502 5119 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507513 5119 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507524 5119 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507534 5119 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507596 5119 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507615 5119 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507624 5119 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507632 5119 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507642 5119 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507652 5119 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507661 5119 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507669 5119 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507677 5119 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507686 5119 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507695 5119 feature_gate.go:328] unrecognized feature gate: InsightsConfig Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507705 5119 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507713 5119 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507721 5119 feature_gate.go:328] unrecognized feature 
gate: AlibabaPlatform Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507730 5119 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507738 5119 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507749 5119 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507757 5119 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507770 5119 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507778 5119 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507786 5119 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507794 5119 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507802 5119 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507810 5119 feature_gate.go:328] unrecognized feature gate: Example Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507818 5119 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507826 5119 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507834 5119 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507842 5119 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507850 5119 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507859 5119 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507870 5119 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507878 5119 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507886 5119 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507893 5119 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507901 5119 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507908 5119 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507916 5119 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507924 5119 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507932 5119 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 
00:10:18.507939 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507947 5119 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507955 5119 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507962 5119 feature_gate.go:328] unrecognized feature gate: Example2 Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507970 5119 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507978 5119 feature_gate.go:328] unrecognized feature gate: GatewayAPI Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507985 5119 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.507994 5119 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.508001 5119 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.508009 5119 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.508019 5119 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.508032 5119 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.508040 5119 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.508050 5119 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.508060 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.508068 5119 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.508076 5119 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.508084 5119 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.508092 5119 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.508100 5119 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.508108 5119 feature_gate.go:328] unrecognized feature gate: DualReplica Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.508116 5119 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.508124 5119 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.508135 5119 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.508143 5119 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.508150 5119 feature_gate.go:328] unrecognized feature gate: SignatureStores Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.509681 5119 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.526034 5119 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.526108 5119 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526170 5119 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526181 5119 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526186 5119 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526192 5119 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526196 5119 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526201 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526206 5119 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526211 5119 feature_gate.go:328] unrecognized 
feature gate: NewOLM Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526215 5119 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526219 5119 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526225 5119 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526235 5119 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526240 5119 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526245 5119 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526250 5119 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526255 5119 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526260 5119 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526266 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526272 5119 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526278 5119 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526283 5119 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526289 5119 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526295 5119 feature_gate.go:328] unrecognized feature gate: OVNObservability Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526301 5119 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526307 5119 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526313 5119 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526318 5119 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526322 5119 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526327 5119 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526331 5119 feature_gate.go:328] unrecognized feature gate: Example2 Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526336 5119 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526350 5119 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526355 5119 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace 
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526360 5119 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526365 5119 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526370 5119 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526374 5119 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526379 5119 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526384 5119 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526388 5119 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526392 5119 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526397 5119 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526403 5119 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526409 5119 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526414 5119 feature_gate.go:328] unrecognized feature gate: InsightsConfig Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526419 5119 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526424 5119 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526428 5119 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526433 5119 feature_gate.go:328] unrecognized feature gate: DualReplica Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526437 5119 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526442 5119 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526447 5119 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526452 5119 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526456 5119 feature_gate.go:328] unrecognized feature gate: SignatureStores Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526461 5119 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526465 5119 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526470 5119 feature_gate.go:328] unrecognized feature gate: PinnedImages Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526474 5119 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Feb 20 00:10:18 crc kubenswrapper[5119]: 
W0220 00:10:18.526479 5119 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526483 5119 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526487 5119 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526491 5119 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526495 5119 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526499 5119 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526511 5119 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526515 5119 feature_gate.go:328] unrecognized feature gate: GatewayAPI Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526519 5119 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526524 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526528 5119 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526532 5119 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526536 5119 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526561 5119 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526565 5119 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526569 5119 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526573 5119 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526577 5119 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526581 5119 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526585 5119 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526589 5119 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526593 5119 feature_gate.go:328] unrecognized feature gate: Example Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526597 5119 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526601 5119 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526608 5119 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526613 5119 feature_gate.go:328] 
unrecognized feature gate: CPMSMachineNamePrefix Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526618 5119 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526622 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.526630 5119 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526809 5119 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526818 5119 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526823 5119 feature_gate.go:328] unrecognized feature gate: Example2 Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526828 5119 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526832 5119 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526836 5119 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526840 5119 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526845 5119 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526849 5119 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526853 5119 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526865 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526870 5119 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
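The feature_gate.go:384 entries above print the effective gate set in Go's map[string]bool notation. A small sketch that parses such a line into a Python dict; the sample string is an abbreviated excerpt of the map shown in the log, not the full set:

#!/usr/bin/env python3
"""Parse a kubelet "feature gates: {map[...]}" line (feature_gate.go:384)
into a dict. SAMPLE is a shortened excerpt of the map printed above."""
import re

SAMPLE = (
    "feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false "
    "ImageVolume:true KMSv1:true ServiceAccountTokenNodeBinding:true "
    "UserNamespacesSupport:true VolumeAttributesClass:false]}"
)

def parse_feature_gates(line: str) -> dict[str, bool]:
    inner = re.search(r"map\[(.*?)\]", line)
    if not inner:
        return {}
    gates = {}
    for pair in inner.group(1).split():
        name, _, value = pair.partition(":")
        gates[name] = value == "true"
    return gates

if __name__ == "__main__":
    gates = parse_feature_gates(SAMPLE)
    print("enabled:", ", ".join(sorted(g for g, on in gates.items() if on)))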
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526876 5119 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526881 5119 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526885 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526889 5119 feature_gate.go:328] unrecognized feature gate: OVNObservability Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526894 5119 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526898 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526903 5119 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526907 5119 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526911 5119 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526916 5119 feature_gate.go:328] unrecognized feature gate: DualReplica Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526920 5119 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526924 5119 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526928 5119 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526932 5119 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526936 5119 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526940 5119 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526946 5119 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526951 5119 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526956 5119 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526960 5119 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526965 5119 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526969 5119 feature_gate.go:328] unrecognized feature gate: SignatureStores Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526973 5119 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526977 5119 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526981 5119 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526985 5119 feature_gate.go:328] 
unrecognized feature gate: ClusterAPIInstall Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526989 5119 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526993 5119 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.526998 5119 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527002 5119 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527006 5119 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527019 5119 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527024 5119 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527028 5119 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527032 5119 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527036 5119 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527040 5119 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527044 5119 feature_gate.go:328] unrecognized feature gate: Example Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527048 5119 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527052 5119 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527056 5119 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527060 5119 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527064 5119 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527068 5119 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527072 5119 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527077 5119 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527082 5119 feature_gate.go:328] unrecognized feature gate: GatewayAPI Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527086 5119 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527093 5119 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527097 5119 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527102 5119 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527106 5119 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527111 5119 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527115 5119 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527119 5119 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527123 5119 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527127 5119 feature_gate.go:328] unrecognized feature gate: NewOLM Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527131 5119 feature_gate.go:328] unrecognized feature gate: PinnedImages Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527136 5119 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527140 5119 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527144 5119 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527148 5119 feature_gate.go:328] unrecognized feature gate: InsightsConfig Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527152 5119 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527156 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527170 5119 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527175 5119 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527181 5119 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527186 5119 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527192 5119 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527197 5119 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527203 5119 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 
00:10:18.527208 5119 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527213 5119 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.527217 5119 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.527226 5119 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.528212 5119 server.go:962] "Client rotation is on, will bootstrap in background" Feb 20 00:10:18 crc kubenswrapper[5119]: E0220 00:10:18.533091 5119 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.541150 5119 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.541351 5119 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.542833 5119 server.go:1019] "Starting client certificate rotation" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.542999 5119 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.546143 5119 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.584026 5119 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.587599 5119 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 20 00:10:18 crc kubenswrapper[5119]: E0220 00:10:18.589029 5119 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.605633 5119 log.go:25] "Validated CRI v1 runtime API" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.668253 5119 log.go:25] "Validated CRI v1 image API" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.671267 5119 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.675227 5119 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 
2026-02-20-00-02-34-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.675327 5119 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:46 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.690294 5119 manager.go:217] Machine: {Timestamp:2026-02-20 00:10:18.688641772 +0000 UTC m=+0.667606084 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33649926144 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:0b11b3ff-8b58-4601-b700-d0d714919b4e BootID:301425df-f98b-4e7d-a726-c87ed89cc7b9 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824963072 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:46 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:0b:c0:5c Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:0b:c0:5c Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:a9:e7:dc Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:b1:0c:8f Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:3c:66:4f Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:13:4f:b0 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:de:c6:4c:02:67:71 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:92:96:ab:0c:8d:79 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649926144 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 
Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.691408 5119 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
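The manager.go:217 machine inventory above reports raw byte counts (MemoryCapacity:33649926144, /var on /dev/vda4 at 85292941312 bytes, and a 214748364800-byte vda disk). A quick conversion of those figures into GiB; the numbers are copied from the log line rather than queried live:

#!/usr/bin/env python3
"""Convert the byte counts reported by cAdvisor (manager.go:217 above)
into GiB. Values are taken verbatim from the log."""

MEMORY_CAPACITY = 33_649_926_144   # MemoryCapacity, bytes
VAR_FS_CAPACITY = 85_292_941_312   # /dev/vda4 mounted at /var, bytes
DISK_SIZE       = 214_748_364_800  # vda block device, bytes

GIB = 1024 ** 3

for label, value in [
    ("memory capacity", MEMORY_CAPACITY),
    ("/var filesystem", VAR_FS_CAPACITY),
    ("vda disk", DISK_SIZE),
]:
    print(f"{label:18s} {value / GIB:7.2f} GiB")
# memory capacity ~= 31.34 GiB, /var ~= 79.44 GiB, vda = 200.00 GiB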
Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.691898 5119 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.695296 5119 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.695407 5119 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.695716 5119 topology_manager.go:138] "Creating topology manager with none policy" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.695773 5119 container_manager_linux.go:306] "Creating device plugin manager" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.695841 5119 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.696908 5119 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.697970 5119 state_mem.go:36] "Initialized new in-memory state store" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.698237 5119 server.go:1267] "Using root directory" path="/var/lib/kubelet" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.702598 5119 kubelet.go:491] "Attempting to sync node with API server" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.702695 5119 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.702778 5119 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.702868 
5119 kubelet.go:397] "Adding apiserver pod source" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.704268 5119 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 20 00:10:18 crc kubenswrapper[5119]: E0220 00:10:18.709030 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 20 00:10:18 crc kubenswrapper[5119]: E0220 00:10:18.709114 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.709737 5119 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.709783 5119 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.711698 5119 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.711744 5119 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.715847 5119 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.716236 5119 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.717164 5119 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.718341 5119 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.718404 5119 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.718426 5119 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.718441 5119 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.718456 5119 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.718471 5119 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.718486 5119 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.718500 5119 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.718517 5119 plugins.go:616] "Loaded volume plugin" 
pluginName="kubernetes.io/fc" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.718588 5119 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.718620 5119 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.719123 5119 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.720407 5119 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.720436 5119 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.723166 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.748958 5119 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.749143 5119 server.go:1295] "Started kubelet" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.749378 5119 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.749575 5119 server_v1.go:47] "podresources" method="list" useActivePods=true Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.749658 5119 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.750255 5119 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 20 00:10:18 crc systemd[1]: Started Kubernetes Kubelet. 
Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.752476 5119 server.go:317] "Adding debug handlers to kubelet server" Feb 20 00:10:18 crc kubenswrapper[5119]: E0220 00:10:18.753531 5119 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.201:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1895cbe65fb9db95 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:18.749090709 +0000 UTC m=+0.728055001,LastTimestamp:2026-02-20 00:10:18.749090709 +0000 UTC m=+0.728055001,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.755787 5119 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.756286 5119 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.759672 5119 volume_manager.go:295] "The desired_state_of_world populator starts" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.759710 5119 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 20 00:10:18 crc kubenswrapper[5119]: E0220 00:10:18.759776 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.759794 5119 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Feb 20 00:10:18 crc kubenswrapper[5119]: E0220 00:10:18.761282 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 20 00:10:18 crc kubenswrapper[5119]: E0220 00:10:18.761965 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="200ms" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.767672 5119 factory.go:55] Registering systemd factory Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.767743 5119 factory.go:223] Registration of the systemd container factory successfully Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.769289 5119 factory.go:153] Registering CRI-O factory Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.769328 5119 factory.go:223] Registration of the crio container factory successfully Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.769482 5119 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.769524 5119 factory.go:103] Registering Raw factory Feb 20 00:10:18 crc 
kubenswrapper[5119]: I0220 00:10:18.769591 5119 manager.go:1196] Started watching for new ooms in manager Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.770897 5119 manager.go:319] Starting recovery of all containers Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.780354 5119 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.825199 5119 manager.go:324] Recovery completed Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829519 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829619 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829633 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829646 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829659 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829670 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829680 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829691 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829704 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829714 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829725 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829736 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829746 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829756 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829771 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829781 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829791 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829803 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829814 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829823 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829838 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" 
volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829848 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829860 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829869 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829879 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829889 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829899 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829910 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829925 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829934 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829948 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829959 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" 
volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.829990 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.830001 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.830011 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.830022 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.830037 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.830048 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.830060 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.830071 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.830081 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.839062 5119 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.839172 5119 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.839203 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.839230 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.839247 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.839263 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.839286 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.839306 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.839330 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.839350 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.839373 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.839390 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.839620 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.839644 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.839663 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.839700 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.839764 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.839787 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.839811 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.839831 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.839854 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.839870 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.839887 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.839913 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.839929 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.839948 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.839962 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.839984 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840000 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840015 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840033 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840047 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840066 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840081 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840100 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840370 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840425 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840450 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840474 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840490 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840508 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840521 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840537 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840563 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840581 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840593 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840608 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840628 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840642 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840673 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840688 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840706 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840725 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840744 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840766 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840783 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840819 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" 
volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840834 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840849 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840866 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840880 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840897 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840910 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840927 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840946 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840961 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840983 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.840998 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" 
volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841017 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841031 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841050 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841066 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841120 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841136 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841149 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841162 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841179 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841195 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841211 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" 
volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841225 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841242 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841260 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841274 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841291 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841306 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841323 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841337 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841349 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841366 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841379 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" 
volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841394 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841406 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841423 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841437 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841453 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841469 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841483 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841500 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841518 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841534 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841565 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" 
volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841578 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841594 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841607 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841625 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841640 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841655 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841669 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841684 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841703 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841719 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841737 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" 
seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841754 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841770 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841785 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841798 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841817 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841831 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841846 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841861 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841874 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841892 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841904 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Feb 20 00:10:18 
crc kubenswrapper[5119]: I0220 00:10:18.841919 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841931 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841944 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841958 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841971 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.841987 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842000 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842017 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842028 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842044 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842056 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 
00:10:18.842068 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842081 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842093 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842108 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842120 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842134 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842149 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842162 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842178 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842190 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842206 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 
00:10:18.842217 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842229 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842243 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842257 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842271 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842283 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842296 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842310 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842324 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842341 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842356 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842371 5119 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842383 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842438 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842451 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842463 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842477 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842491 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842506 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842520 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842532 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842596 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842608 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842623 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842634 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842648 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842662 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842676 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842691 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842703 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842717 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842729 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842797 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842819 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842838 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842851 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842865 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842877 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842890 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842956 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842972 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842983 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.842997 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.843007 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.843019 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.843033 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.843047 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.843061 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.843072 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.843089 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.843101 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.843112 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.843144 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.843157 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.843172 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.843183 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" 
volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.843196 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.843207 5119 reconstruct.go:97] "Volume reconstruction finished" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.843214 5119 reconciler.go:26] "Reconciler: start to sync state" Feb 20 00:10:18 crc kubenswrapper[5119]: W0220 00:10:18.848216 5119 watcher.go:93] Error while processing event ("/sys/fs/cgroup/system.slice/ocp-cluster-ca.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/system.slice/ocp-cluster-ca.service: no such file or directory Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.854625 5119 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.854683 5119 status_manager.go:230] "Starting to sync pod status with apiserver" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.854720 5119 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.854733 5119 kubelet.go:2451] "Starting kubelet main sync loop" Feb 20 00:10:18 crc kubenswrapper[5119]: E0220 00:10:18.854782 5119 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.855566 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:18 crc kubenswrapper[5119]: E0220 00:10:18.856767 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.859613 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.859663 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.859679 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:18 crc kubenswrapper[5119]: E0220 00:10:18.859849 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.862371 5119 cpu_manager.go:222] "Starting CPU manager" policy="none" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.862392 5119 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.862418 5119 state_mem.go:36] "Initialized new in-memory state store" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.870466 5119 policy_none.go:49] "None policy: Start" Feb 20 
00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.870503 5119 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.870521 5119 state_mem.go:35] "Initializing new in-memory state store" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.923651 5119 manager.go:341] "Starting Device Plugin manager" Feb 20 00:10:18 crc kubenswrapper[5119]: E0220 00:10:18.923952 5119 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.923974 5119 server.go:85] "Starting device plugin registration server" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.924825 5119 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.924847 5119 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.925023 5119 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.925117 5119 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.925129 5119 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 20 00:10:18 crc kubenswrapper[5119]: E0220 00:10:18.930010 5119 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\"" Feb 20 00:10:18 crc kubenswrapper[5119]: E0220 00:10:18.930063 5119 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.954879 5119 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.955079 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.955917 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.955983 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.956002 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.957159 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.957314 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.957393 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.957901 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.957973 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.957990 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.958113 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.958158 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.958173 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.959066 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.959192 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.959245 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.959900 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.959927 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.959898 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.959970 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.959984 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.959940 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.960816 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.961165 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.961211 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.961472 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.961536 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.961568 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.961693 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.961720 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.961731 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.962318 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.962669 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.962714 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:18 crc kubenswrapper[5119]: E0220 00:10:18.962867 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="400ms" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.962951 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.962983 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.962994 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.963281 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.963314 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.963325 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.963679 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.963715 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.964455 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.964500 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:18 crc kubenswrapper[5119]: I0220 00:10:18.964515 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:19 crc kubenswrapper[5119]: E0220 00:10:19.008671 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:19 crc kubenswrapper[5119]: E0220 00:10:19.019200 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.025018 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.026249 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.026301 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.026314 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.026355 5119 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 20 00:10:19 crc kubenswrapper[5119]: E0220 00:10:19.027110 5119 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.201:6443: connect: connection refused" node="crc" Feb 20 00:10:19 crc kubenswrapper[5119]: E0220 00:10:19.037208 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.049243 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.049300 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.049333 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.049359 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.049384 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.050113 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.050152 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.050177 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.050215 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.050265 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.050402 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.050451 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.050472 5119 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.050491 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.050509 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.050529 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.050660 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.051017 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.051084 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.051183 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.051233 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.051205 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.051831 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.052116 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: E0220 00:10:19.057329 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:19 crc kubenswrapper[5119]: E0220 00:10:19.065351 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.152113 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.152207 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.152668 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.152711 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.152758 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.152800 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: 
\"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.152828 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.152921 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.153079 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.153173 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.153222 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.153237 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.153223 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.153286 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.153319 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.153297 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.153352 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.153346 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.153399 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.153416 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.153433 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.153452 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.153470 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.153486 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.153517 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.153602 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.153641 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.153678 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.153720 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.153752 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.227494 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.229064 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.229137 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.229152 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.229186 5119 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 20 00:10:19 crc kubenswrapper[5119]: E0220 00:10:19.229958 5119 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.201:6443: connect: connection refused" node="crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.255213 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.255284 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.255324 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.255335 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.255366 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.255432 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.255487 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.255498 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.310482 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.320661 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.338835 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.357887 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: W0220 00:10:19.360203 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-3959ab5e0528f925b99c0decdee8a8b5497ac2acd1eef346ce85ec37848dde20 WatchSource:0}: Error finding container 3959ab5e0528f925b99c0decdee8a8b5497ac2acd1eef346ce85ec37848dde20: Status 404 returned error can't find the container with id 3959ab5e0528f925b99c0decdee8a8b5497ac2acd1eef346ce85ec37848dde20 Feb 20 00:10:19 crc kubenswrapper[5119]: W0220 00:10:19.364272 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-eeeb41dd09ecd8e8e40f3fd56579523fb58345cf1006f0850c40eee5e8287c46 WatchSource:0}: Error finding container eeeb41dd09ecd8e8e40f3fd56579523fb58345cf1006f0850c40eee5e8287c46: Status 404 returned error can't find the container with id eeeb41dd09ecd8e8e40f3fd56579523fb58345cf1006f0850c40eee5e8287c46 Feb 20 00:10:19 crc kubenswrapper[5119]: E0220 00:10:19.364297 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="800ms" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.365695 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.380740 5119 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 20 00:10:19 crc kubenswrapper[5119]: W0220 00:10:19.393284 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-992a0eeaa63fbb3848bc51553bbd8f0b9f423a9752d492d78c82e71bb6fbc3a7 WatchSource:0}: Error finding container 992a0eeaa63fbb3848bc51553bbd8f0b9f423a9752d492d78c82e71bb6fbc3a7: Status 404 returned error can't find the container with id 992a0eeaa63fbb3848bc51553bbd8f0b9f423a9752d492d78c82e71bb6fbc3a7 Feb 20 00:10:19 crc kubenswrapper[5119]: W0220 00:10:19.402429 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-6ff0e4305ba5df9e9ad11066ac2430106900ed15f6971f431ec46f2b8068fb91 WatchSource:0}: Error finding container 6ff0e4305ba5df9e9ad11066ac2430106900ed15f6971f431ec46f2b8068fb91: Status 404 returned error can't find the container with id 6ff0e4305ba5df9e9ad11066ac2430106900ed15f6971f431ec46f2b8068fb91 Feb 20 00:10:19 crc kubenswrapper[5119]: W0220 00:10:19.405371 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-eefc026655c4568deae15af37058a50af0650f2f3c2319d38545ac38aa9776dc WatchSource:0}: Error finding container eefc026655c4568deae15af37058a50af0650f2f3c2319d38545ac38aa9776dc: Status 404 returned error can't find the container with id eefc026655c4568deae15af37058a50af0650f2f3c2319d38545ac38aa9776dc Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.630873 5119 kubelet_node_status.go:413] 
"Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.632676 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.632719 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.632731 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.632760 5119 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 20 00:10:19 crc kubenswrapper[5119]: E0220 00:10:19.633234 5119 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.201:6443: connect: connection refused" node="crc" Feb 20 00:10:19 crc kubenswrapper[5119]: E0220 00:10:19.670901 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.724410 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Feb 20 00:10:19 crc kubenswrapper[5119]: E0220 00:10:19.806496 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.867092 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"eefc026655c4568deae15af37058a50af0650f2f3c2319d38545ac38aa9776dc"} Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.868664 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"6ff0e4305ba5df9e9ad11066ac2430106900ed15f6971f431ec46f2b8068fb91"} Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.870706 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"992a0eeaa63fbb3848bc51553bbd8f0b9f423a9752d492d78c82e71bb6fbc3a7"} Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.872053 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"eeeb41dd09ecd8e8e40f3fd56579523fb58345cf1006f0850c40eee5e8287c46"} Feb 20 00:10:19 crc kubenswrapper[5119]: I0220 00:10:19.873263 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"3959ab5e0528f925b99c0decdee8a8b5497ac2acd1eef346ce85ec37848dde20"} Feb 20 00:10:19 crc kubenswrapper[5119]: E0220 00:10:19.885309 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 20 00:10:20 crc kubenswrapper[5119]: E0220 00:10:20.165089 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="1.6s" Feb 20 00:10:20 crc kubenswrapper[5119]: E0220 00:10:20.327225 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.433980 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.435858 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.435921 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.435941 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.435984 5119 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 20 00:10:20 crc kubenswrapper[5119]: E0220 00:10:20.436732 5119 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.201:6443: connect: connection refused" node="crc" Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.724985 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.753013 5119 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Feb 20 00:10:20 crc kubenswrapper[5119]: E0220 00:10:20.754432 5119 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.880640 5119 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" 
containerID="93a050e7bbe89c9df64bd2d4f49af71665c0e01049e964897414a8ba45dabf06" exitCode=0 Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.880910 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"93a050e7bbe89c9df64bd2d4f49af71665c0e01049e964897414a8ba45dabf06"} Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.883270 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.885927 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.885977 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.885993 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:20 crc kubenswrapper[5119]: E0220 00:10:20.886384 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.886738 5119 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="687a28d8696979239e51443720daad7275b73f5e8a04f2c4ba1625ca22149724" exitCode=0 Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.886957 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"687a28d8696979239e51443720daad7275b73f5e8a04f2c4ba1625ca22149724"} Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.887037 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.888324 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.888372 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.888446 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:20 crc kubenswrapper[5119]: E0220 00:10:20.888853 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.889882 5119 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="07c89f665f0945400baa411dbc321ad8bb30b8661b4f89381364b6cea6c3c817" exitCode=0 Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.889961 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"07c89f665f0945400baa411dbc321ad8bb30b8661b4f89381364b6cea6c3c817"} Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.890108 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.891494 5119 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.891527 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.891593 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:20 crc kubenswrapper[5119]: E0220 00:10:20.891798 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.894446 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"21a886847fb1015eec09cff82cb000eef0affc5c81aea9cb3247b939a42c96f6"} Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.894512 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"ac6691d283aafa26b76bde45116abaa5c977877e0a499e7f032c8ecbb241d0cf"} Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.896409 5119 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="359d6c31c66888047499ad5422caffca5098a97e7e28b76b359cfec7390c5605" exitCode=0 Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.896456 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"359d6c31c66888047499ad5422caffca5098a97e7e28b76b359cfec7390c5605"} Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.896706 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.897630 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.897677 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.897692 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:20 crc kubenswrapper[5119]: E0220 00:10:20.897975 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.900049 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.900775 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.900818 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:20 crc kubenswrapper[5119]: I0220 00:10:20.900832 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:21 crc kubenswrapper[5119]: I0220 00:10:21.723892 5119 csi_plugin.go:988] Failed to 
contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Feb 20 00:10:21 crc kubenswrapper[5119]: E0220 00:10:21.766898 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="3.2s" Feb 20 00:10:21 crc kubenswrapper[5119]: I0220 00:10:21.909614 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"4ca51d552b077446f07780f028dbef71472913f569131068f1107adf6ca6b25f"} Feb 20 00:10:21 crc kubenswrapper[5119]: I0220 00:10:21.909793 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:21 crc kubenswrapper[5119]: I0220 00:10:21.911479 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:21 crc kubenswrapper[5119]: I0220 00:10:21.911510 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:21 crc kubenswrapper[5119]: I0220 00:10:21.911519 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:21 crc kubenswrapper[5119]: E0220 00:10:21.911815 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:21 crc kubenswrapper[5119]: I0220 00:10:21.916584 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"bada530ab3542294459927d7a185cd77313350302b7e29d8901ce07eeea3a472"} Feb 20 00:10:21 crc kubenswrapper[5119]: I0220 00:10:21.916613 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"b492ac1d6abbc4a4afef8b9614505249ae7efcd38c8fcd48a0c97f1e392c2e8a"} Feb 20 00:10:21 crc kubenswrapper[5119]: I0220 00:10:21.916632 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"c382bde8ab381f15e8e6ca6875bd303dfc59d8c2c14de3de4a0e317a888901d9"} Feb 20 00:10:21 crc kubenswrapper[5119]: I0220 00:10:21.916735 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:21 crc kubenswrapper[5119]: I0220 00:10:21.917470 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:21 crc kubenswrapper[5119]: I0220 00:10:21.917492 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:21 crc kubenswrapper[5119]: I0220 00:10:21.917501 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:21 crc kubenswrapper[5119]: E0220 00:10:21.917671 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get 
node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:21 crc kubenswrapper[5119]: I0220 00:10:21.920195 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"fdc287b96e8bcf9191f28a271ee71230df2b1cbc9085c1f59a2bbaf52da9fe89"} Feb 20 00:10:21 crc kubenswrapper[5119]: I0220 00:10:21.920275 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"3a6b81a15ece4cea96f108cd94199159531a44f069002237b6cb4f7239716d4e"} Feb 20 00:10:21 crc kubenswrapper[5119]: I0220 00:10:21.920451 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:21 crc kubenswrapper[5119]: I0220 00:10:21.921068 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:21 crc kubenswrapper[5119]: I0220 00:10:21.921156 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:21 crc kubenswrapper[5119]: I0220 00:10:21.921225 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:21 crc kubenswrapper[5119]: E0220 00:10:21.921408 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:21 crc kubenswrapper[5119]: I0220 00:10:21.923596 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"0750a3e5db474c253e5acbbb6cd8b55f6a546a4d0dcb519fe8522e83acadaf5c"} Feb 20 00:10:21 crc kubenswrapper[5119]: I0220 00:10:21.923677 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"f5b8c1c18a290a66db28dd2beef30b9fad2e2a11aaa57fa0cf9af61e3569c9cb"} Feb 20 00:10:21 crc kubenswrapper[5119]: I0220 00:10:21.923931 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"c117f2e55fd27f6545f61b9e5a9b4d4082c31c133a91abee8c99eec651ffd5e7"} Feb 20 00:10:21 crc kubenswrapper[5119]: I0220 00:10:21.925265 5119 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="14a5c1db348a6c359e6211876a93e9f3162c8962610e30000970a311eb974948" exitCode=0 Feb 20 00:10:21 crc kubenswrapper[5119]: I0220 00:10:21.925445 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"14a5c1db348a6c359e6211876a93e9f3162c8962610e30000970a311eb974948"} Feb 20 00:10:21 crc kubenswrapper[5119]: I0220 00:10:21.925634 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:21 crc kubenswrapper[5119]: I0220 00:10:21.926100 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:21 crc kubenswrapper[5119]: I0220 00:10:21.926173 5119 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:21 crc kubenswrapper[5119]: I0220 00:10:21.926238 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:21 crc kubenswrapper[5119]: E0220 00:10:21.926612 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.036856 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.038453 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.038563 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.038575 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.038606 5119 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 20 00:10:22 crc kubenswrapper[5119]: E0220 00:10:22.039217 5119 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.201:6443: connect: connection refused" node="crc" Feb 20 00:10:22 crc kubenswrapper[5119]: E0220 00:10:22.220947 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 20 00:10:22 crc kubenswrapper[5119]: E0220 00:10:22.267150 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.346509 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.770021 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.781396 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.808133 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.932891 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b542f2ecea8d7e33394975fb1b2ee41c1eb356fdab06e5b5584d6f661c56c30e"} Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 
00:10:22.932967 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"c46d0135c3a754013f0e3f6bd50f94d244e87fcb264ea59ecac073594c52c17b"} Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.933220 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.934076 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.934132 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.934155 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:22 crc kubenswrapper[5119]: E0220 00:10:22.934506 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.937532 5119 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="625e626759f6cd69ada64fc36c5a5d994b1b46a7b2495c2aabd1ccefbc71d93c" exitCode=0 Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.937697 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.937830 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.937882 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.937967 5119 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.938054 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"625e626759f6cd69ada64fc36c5a5d994b1b46a7b2495c2aabd1ccefbc71d93c"} Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.938086 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.938461 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.938516 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.938577 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.938658 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.938700 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.938718 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:22 crc 
kubenswrapper[5119]: I0220 00:10:22.938781 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.938847 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.938882 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.938938 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:22 crc kubenswrapper[5119]: E0220 00:10:22.938961 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:22 crc kubenswrapper[5119]: E0220 00:10:22.939058 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.938968 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:22 crc kubenswrapper[5119]: I0220 00:10:22.939226 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:22 crc kubenswrapper[5119]: E0220 00:10:22.939611 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:22 crc kubenswrapper[5119]: E0220 00:10:22.939758 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:23 crc kubenswrapper[5119]: I0220 00:10:23.925789 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:10:23 crc kubenswrapper[5119]: I0220 00:10:23.945512 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"24cfd3f7eadbcc71f1db924139cf5009452536da3cb89e686032ce703577be4a"} Feb 20 00:10:23 crc kubenswrapper[5119]: I0220 00:10:23.945609 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"c70b091e898caee84f634a04ecdd706530864da43fe24e1463f6db14172dd6ae"} Feb 20 00:10:23 crc kubenswrapper[5119]: I0220 00:10:23.945639 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"b6ebc5f8bbc7fbe8b2df8925c5e948ed178d1bebbf72ed38d16b595cb57c3168"} Feb 20 00:10:23 crc kubenswrapper[5119]: I0220 00:10:23.945695 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:23 crc kubenswrapper[5119]: I0220 00:10:23.945862 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:23 crc kubenswrapper[5119]: I0220 00:10:23.946471 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:23 crc kubenswrapper[5119]: I0220 00:10:23.946565 5119 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:23 crc kubenswrapper[5119]: I0220 00:10:23.946601 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:23 crc kubenswrapper[5119]: I0220 00:10:23.946606 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:23 crc kubenswrapper[5119]: I0220 00:10:23.946615 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:23 crc kubenswrapper[5119]: I0220 00:10:23.946645 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:23 crc kubenswrapper[5119]: E0220 00:10:23.947078 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:23 crc kubenswrapper[5119]: E0220 00:10:23.947921 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:24 crc kubenswrapper[5119]: I0220 00:10:24.946910 5119 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Feb 20 00:10:24 crc kubenswrapper[5119]: I0220 00:10:24.955746 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"4c04c59d79a01688cc5bf8aeefc1e779191d870147e49cc4215f610dc453f476"} Feb 20 00:10:24 crc kubenswrapper[5119]: I0220 00:10:24.955855 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:24 crc kubenswrapper[5119]: I0220 00:10:24.955866 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"9410342c809c3092f734e0f6d40993c834cede3ba6bec4a3eac0bfe03b0769de"} Feb 20 00:10:24 crc kubenswrapper[5119]: I0220 00:10:24.955977 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:24 crc kubenswrapper[5119]: I0220 00:10:24.956659 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:24 crc kubenswrapper[5119]: I0220 00:10:24.956719 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:24 crc kubenswrapper[5119]: I0220 00:10:24.956736 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:24 crc kubenswrapper[5119]: I0220 00:10:24.956823 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:24 crc kubenswrapper[5119]: I0220 00:10:24.956858 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:24 crc kubenswrapper[5119]: I0220 00:10:24.956873 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:24 crc kubenswrapper[5119]: E0220 00:10:24.957234 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:24 crc kubenswrapper[5119]: E0220 00:10:24.957687 5119 
kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:25 crc kubenswrapper[5119]: I0220 00:10:25.239650 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:25 crc kubenswrapper[5119]: I0220 00:10:25.241293 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:25 crc kubenswrapper[5119]: I0220 00:10:25.241395 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:25 crc kubenswrapper[5119]: I0220 00:10:25.241435 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:25 crc kubenswrapper[5119]: I0220 00:10:25.241502 5119 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 20 00:10:25 crc kubenswrapper[5119]: I0220 00:10:25.844991 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 20 00:10:25 crc kubenswrapper[5119]: I0220 00:10:25.958713 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:25 crc kubenswrapper[5119]: I0220 00:10:25.959517 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:25 crc kubenswrapper[5119]: I0220 00:10:25.959582 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:25 crc kubenswrapper[5119]: I0220 00:10:25.959595 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:25 crc kubenswrapper[5119]: E0220 00:10:25.959979 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:25 crc kubenswrapper[5119]: I0220 00:10:25.976314 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:10:25 crc kubenswrapper[5119]: I0220 00:10:25.976498 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:25 crc kubenswrapper[5119]: I0220 00:10:25.977251 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:25 crc kubenswrapper[5119]: I0220 00:10:25.977280 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:25 crc kubenswrapper[5119]: I0220 00:10:25.977289 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:25 crc kubenswrapper[5119]: E0220 00:10:25.977600 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:26 crc kubenswrapper[5119]: I0220 00:10:26.268149 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 20 00:10:26 crc kubenswrapper[5119]: I0220 00:10:26.268517 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:26 crc kubenswrapper[5119]: I0220 00:10:26.269958 5119 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:26 crc kubenswrapper[5119]: I0220 00:10:26.270033 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:26 crc kubenswrapper[5119]: I0220 00:10:26.270053 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:26 crc kubenswrapper[5119]: E0220 00:10:26.270779 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:26 crc kubenswrapper[5119]: I0220 00:10:26.962412 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:26 crc kubenswrapper[5119]: I0220 00:10:26.963433 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:26 crc kubenswrapper[5119]: I0220 00:10:26.963530 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:26 crc kubenswrapper[5119]: I0220 00:10:26.963591 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:26 crc kubenswrapper[5119]: E0220 00:10:26.964430 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:27 crc kubenswrapper[5119]: I0220 00:10:27.545042 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:10:27 crc kubenswrapper[5119]: I0220 00:10:27.545400 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:27 crc kubenswrapper[5119]: I0220 00:10:27.546468 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:27 crc kubenswrapper[5119]: I0220 00:10:27.546527 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:27 crc kubenswrapper[5119]: I0220 00:10:27.546574 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:27 crc kubenswrapper[5119]: E0220 00:10:27.547138 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:27 crc kubenswrapper[5119]: I0220 00:10:27.589150 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc" Feb 20 00:10:27 crc kubenswrapper[5119]: I0220 00:10:27.965632 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:27 crc kubenswrapper[5119]: I0220 00:10:27.967004 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:27 crc kubenswrapper[5119]: I0220 00:10:27.967086 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:27 crc kubenswrapper[5119]: I0220 00:10:27.967107 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:27 crc kubenswrapper[5119]: E0220 00:10:27.967926 5119 kubelet.go:3336] "No need to create a 
mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:28 crc kubenswrapper[5119]: E0220 00:10:28.930357 5119 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 20 00:10:30 crc kubenswrapper[5119]: I0220 00:10:30.870619 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:10:30 crc kubenswrapper[5119]: I0220 00:10:30.871011 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:30 crc kubenswrapper[5119]: I0220 00:10:30.872576 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:30 crc kubenswrapper[5119]: I0220 00:10:30.872645 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:30 crc kubenswrapper[5119]: I0220 00:10:30.872667 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:30 crc kubenswrapper[5119]: E0220 00:10:30.873230 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:30 crc kubenswrapper[5119]: I0220 00:10:30.880249 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:10:30 crc kubenswrapper[5119]: I0220 00:10:30.974300 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:30 crc kubenswrapper[5119]: I0220 00:10:30.975223 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:30 crc kubenswrapper[5119]: I0220 00:10:30.975263 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:30 crc kubenswrapper[5119]: I0220 00:10:30.975277 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:30 crc kubenswrapper[5119]: E0220 00:10:30.975689 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:32 crc kubenswrapper[5119]: I0220 00:10:32.725871 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 20 00:10:32 crc kubenswrapper[5119]: I0220 00:10:32.727214 5119 trace.go:236] Trace[1386348009]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Feb-2026 00:10:22.725) (total time: 10001ms): Feb 20 00:10:32 crc kubenswrapper[5119]: Trace[1386348009]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:10:32.727) Feb 20 00:10:32 crc kubenswrapper[5119]: Trace[1386348009]: [10.001977801s] [10.001977801s] END Feb 20 00:10:32 crc kubenswrapper[5119]: E0220 00:10:32.727284 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 20 00:10:32 crc kubenswrapper[5119]: I0220 00:10:32.918040 5119 trace.go:236] Trace[521602247]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Feb-2026 00:10:22.916) (total time: 10001ms): Feb 20 00:10:32 crc kubenswrapper[5119]: Trace[521602247]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:10:32.917) Feb 20 00:10:32 crc kubenswrapper[5119]: Trace[521602247]: [10.001592804s] [10.001592804s] END Feb 20 00:10:32 crc kubenswrapper[5119]: E0220 00:10:32.918096 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 20 00:10:33 crc kubenswrapper[5119]: I0220 00:10:33.871505 5119 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 20 00:10:33 crc kubenswrapper[5119]: I0220 00:10:33.871647 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 20 00:10:33 crc kubenswrapper[5119]: I0220 00:10:33.894882 5119 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 20 00:10:33 crc kubenswrapper[5119]: I0220 00:10:33.895258 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 20 00:10:33 crc kubenswrapper[5119]: I0220 00:10:33.900846 5119 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 20 00:10:33 crc kubenswrapper[5119]: I0220 00:10:33.901045 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" 
probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 20 00:10:34 crc kubenswrapper[5119]: E0220 00:10:34.968362 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 20 00:10:35 crc kubenswrapper[5119]: I0220 00:10:35.887185 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 20 00:10:35 crc kubenswrapper[5119]: I0220 00:10:35.887667 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:35 crc kubenswrapper[5119]: I0220 00:10:35.889164 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:35 crc kubenswrapper[5119]: I0220 00:10:35.889232 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:35 crc kubenswrapper[5119]: I0220 00:10:35.889254 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:35 crc kubenswrapper[5119]: E0220 00:10:35.890122 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:35 crc kubenswrapper[5119]: I0220 00:10:35.911697 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 20 00:10:35 crc kubenswrapper[5119]: I0220 00:10:35.986217 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:10:35 crc kubenswrapper[5119]: I0220 00:10:35.986685 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:35 crc kubenswrapper[5119]: I0220 00:10:35.987638 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:35 crc kubenswrapper[5119]: I0220 00:10:35.987720 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:35 crc kubenswrapper[5119]: I0220 00:10:35.987737 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:35 crc kubenswrapper[5119]: E0220 00:10:35.988384 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:35 crc kubenswrapper[5119]: I0220 00:10:35.988601 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:35 crc kubenswrapper[5119]: I0220 00:10:35.989717 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:35 crc kubenswrapper[5119]: I0220 00:10:35.989782 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:35 crc kubenswrapper[5119]: I0220 00:10:35.989808 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:35 crc kubenswrapper[5119]: E0220 00:10:35.990593 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 
20 00:10:35 crc kubenswrapper[5119]: I0220 00:10:35.993357 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:10:36 crc kubenswrapper[5119]: E0220 00:10:36.767690 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 20 00:10:36 crc kubenswrapper[5119]: I0220 00:10:36.991183 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:36 crc kubenswrapper[5119]: I0220 00:10:36.992172 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:36 crc kubenswrapper[5119]: I0220 00:10:36.992232 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:36 crc kubenswrapper[5119]: I0220 00:10:36.992285 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:36 crc kubenswrapper[5119]: E0220 00:10:36.992752 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:37 crc kubenswrapper[5119]: E0220 00:10:37.311277 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 20 00:10:38 crc kubenswrapper[5119]: I0220 00:10:38.903899 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:10:38 crc kubenswrapper[5119]: E0220 00:10:38.904019 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbe65fb9db95 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:18.749090709 +0000 UTC m=+0.728055001,LastTimestamp:2026-02-20 00:10:18.749090709 +0000 UTC m=+0.728055001,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:38 crc kubenswrapper[5119]: I0220 00:10:38.904249 5119 trace.go:236] Trace[68570922]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Feb-2026 00:10:27.727) (total time: 11177ms): Feb 20 00:10:38 crc kubenswrapper[5119]: Trace[68570922]: ---"Objects listed" error:runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope 11177ms (00:10:38.904) Feb 20 00:10:38 crc kubenswrapper[5119]: Trace[68570922]: [11.177065965s] [11.177065965s] END Feb 20 
00:10:38 crc kubenswrapper[5119]: E0220 00:10:38.904319 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 20 00:10:38 crc kubenswrapper[5119]: I0220 00:10:38.909326 5119 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Feb 20 00:10:38 crc kubenswrapper[5119]: I0220 00:10:38.912806 5119 trace.go:236] Trace[2057917384]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Feb-2026 00:10:27.344) (total time: 11567ms): Feb 20 00:10:38 crc kubenswrapper[5119]: Trace[2057917384]: ---"Objects listed" error:nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope 11567ms (00:10:38.912) Feb 20 00:10:38 crc kubenswrapper[5119]: Trace[2057917384]: [11.567982612s] [11.567982612s] END Feb 20 00:10:38 crc kubenswrapper[5119]: E0220 00:10:38.912846 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 20 00:10:38 crc kubenswrapper[5119]: E0220 00:10:38.912835 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbe66650c9a2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:18.859645346 +0000 UTC m=+0.838609638,LastTimestamp:2026-02-20 00:10:18.859645346 +0000 UTC m=+0.838609638,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:38 crc kubenswrapper[5119]: E0220 00:10:38.914071 5119 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 20 00:10:38 crc kubenswrapper[5119]: E0220 00:10:38.917440 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbe666512dad default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:18.859670957 +0000 UTC m=+0.838635249,LastTimestamp:2026-02-20 00:10:18.859670957 +0000 UTC m=+0.838635249,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:38 crc kubenswrapper[5119]: E0220 00:10:38.921644 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbe666516bc9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:18.859686857 +0000 UTC m=+0.838651149,LastTimestamp:2026-02-20 00:10:18.859686857 +0000 UTC m=+0.838651149,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:38 crc kubenswrapper[5119]: E0220 00:10:38.926757 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbe66a489f58 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:18.926219096 +0000 UTC m=+0.905183388,LastTimestamp:2026-02-20 00:10:18.926219096 +0000 UTC m=+0.905183388,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:38 crc kubenswrapper[5119]: E0220 00:10:38.931121 5119 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 20 00:10:38 crc kubenswrapper[5119]: E0220 00:10:38.932815 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbe66650c9a2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbe66650c9a2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:18.859645346 +0000 UTC m=+0.838609638,LastTimestamp:2026-02-20 00:10:18.95595909 +0000 UTC m=+0.934923402,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:38 crc kubenswrapper[5119]: E0220 00:10:38.937041 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbe666512dad\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbe666512dad default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: 
NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:18.859670957 +0000 UTC m=+0.838635249,LastTimestamp:2026-02-20 00:10:18.955993081 +0000 UTC m=+0.934957393,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:38 crc kubenswrapper[5119]: E0220 00:10:38.949642 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbe666516bc9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbe666516bc9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:18.859686857 +0000 UTC m=+0.838651149,LastTimestamp:2026-02-20 00:10:18.956013082 +0000 UTC m=+0.934977384,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:38 crc kubenswrapper[5119]: E0220 00:10:38.961284 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbe66650c9a2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbe66650c9a2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:18.859645346 +0000 UTC m=+0.838609638,LastTimestamp:2026-02-20 00:10:18.957941159 +0000 UTC m=+0.936905461,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:38 crc kubenswrapper[5119]: I0220 00:10:38.966133 5119 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58024->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 20 00:10:38 crc kubenswrapper[5119]: I0220 00:10:38.966290 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58024->192.168.126.11:17697: read: connection reset by peer" Feb 20 00:10:38 crc kubenswrapper[5119]: I0220 00:10:38.966696 5119 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36424->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 20 00:10:38 crc kubenswrapper[5119]: I0220 00:10:38.966816 5119 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36424->192.168.126.11:17697: read: connection reset by peer" Feb 20 00:10:38 crc kubenswrapper[5119]: I0220 00:10:38.967280 5119 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 20 00:10:38 crc kubenswrapper[5119]: I0220 00:10:38.967390 5119 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 20 00:10:38 crc kubenswrapper[5119]: E0220 00:10:38.971693 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbe666512dad\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbe666512dad default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:18.859670957 +0000 UTC m=+0.838635249,LastTimestamp:2026-02-20 00:10:18.95798306 +0000 UTC m=+0.936947362,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:38 crc kubenswrapper[5119]: E0220 00:10:38.977448 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbe666516bc9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbe666516bc9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:18.859686857 +0000 UTC m=+0.838651149,LastTimestamp:2026-02-20 00:10:18.95799721 +0000 UTC m=+0.936961522,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:38 crc kubenswrapper[5119]: E0220 00:10:38.985351 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbe66650c9a2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbe66650c9a2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:18.859645346 +0000 UTC 
m=+0.838609638,LastTimestamp:2026-02-20 00:10:18.958134725 +0000 UTC m=+0.937099017,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:38 crc kubenswrapper[5119]: E0220 00:10:38.993799 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbe666512dad\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbe666512dad default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:18.859670957 +0000 UTC m=+0.838635249,LastTimestamp:2026-02-20 00:10:18.958165636 +0000 UTC m=+0.937129928,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.003784 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbe666516bc9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbe666516bc9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:18.859686857 +0000 UTC m=+0.838651149,LastTimestamp:2026-02-20 00:10:18.958178936 +0000 UTC m=+0.937143228,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.012063 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbe66650c9a2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbe66650c9a2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:18.859645346 +0000 UTC m=+0.838609638,LastTimestamp:2026-02-20 00:10:18.959919418 +0000 UTC m=+0.938883710,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.020388 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbe666512dad\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbe666512dad default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: 
NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:18.859670957 +0000 UTC m=+0.838635249,LastTimestamp:2026-02-20 00:10:18.959934648 +0000 UTC m=+0.938898940,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.025428 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbe66650c9a2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbe66650c9a2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:18.859645346 +0000 UTC m=+0.838609638,LastTimestamp:2026-02-20 00:10:18.959958029 +0000 UTC m=+0.938922331,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.031708 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbe666512dad\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbe666512dad default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:18.859670957 +0000 UTC m=+0.838635249,LastTimestamp:2026-02-20 00:10:18.95997873 +0000 UTC m=+0.938943032,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.036237 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbe666516bc9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbe666516bc9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:18.859686857 +0000 UTC m=+0.838651149,LastTimestamp:2026-02-20 00:10:18.95999018 +0000 UTC m=+0.938954472,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.040276 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbe666516bc9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbe666516bc9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:18.859686857 +0000 UTC m=+0.838651149,LastTimestamp:2026-02-20 00:10:18.960023961 +0000 UTC m=+0.938988273,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.048953 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbe66650c9a2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbe66650c9a2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:18.859645346 +0000 UTC m=+0.838609638,LastTimestamp:2026-02-20 00:10:18.961500215 +0000 UTC m=+0.940464517,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.056791 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbe666512dad\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbe666512dad default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:18.859670957 +0000 UTC m=+0.838635249,LastTimestamp:2026-02-20 00:10:18.961561057 +0000 UTC m=+0.940525359,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.060757 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbe666516bc9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbe666516bc9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:18.859686857 +0000 UTC m=+0.838651149,LastTimestamp:2026-02-20 00:10:18.961575447 +0000 UTC m=+0.940539749,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.064488 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbe66650c9a2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"default\"" event="&Event{ObjectMeta:{crc.1895cbe66650c9a2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:18.859645346 +0000 UTC m=+0.838609638,LastTimestamp:2026-02-20 00:10:18.961709081 +0000 UTC m=+0.940673383,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.069731 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbe666512dad\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbe666512dad default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:18.859670957 +0000 UTC m=+0.838635249,LastTimestamp:2026-02-20 00:10:18.961725661 +0000 UTC m=+0.940689953,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.077149 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbe68568a5f9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:19.381302777 +0000 UTC m=+1.360267099,LastTimestamp:2026-02-20 00:10:19.381302777 +0000 UTC m=+1.360267099,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.081875 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbe6856b4f34 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 
00:10:19.381477172 +0000 UTC m=+1.360441504,LastTimestamp:2026-02-20 00:10:19.381477172 +0000 UTC m=+1.360441504,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.086343 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1895cbe68687112a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:19.400073514 +0000 UTC m=+1.379037846,LastTimestamp:2026-02-20 00:10:19.400073514 +0000 UTC m=+1.379037846,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.090896 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1895cbe68713f059 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:19.409305689 +0000 UTC m=+1.388270011,LastTimestamp:2026-02-20 00:10:19.409305689 +0000 UTC m=+1.388270011,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.094834 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1895cbe687167630 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 
00:10:19.409471024 +0000 UTC m=+1.388435356,LastTimestamp:2026-02-20 00:10:19.409471024 +0000 UTC m=+1.388435356,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.098723 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1895cbe6b5d82935 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:20.193917237 +0000 UTC m=+2.172881529,LastTimestamp:2026-02-20 00:10:20.193917237 +0000 UTC m=+2.172881529,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.102582 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1895cbe6b5d9d3c8 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:20.19402644 +0000 UTC m=+2.172990742,LastTimestamp:2026-02-20 00:10:20.19402644 +0000 UTC m=+2.172990742,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.106259 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbe6b5db1140 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:20.194107712 +0000 UTC m=+2.173072004,LastTimestamp:2026-02-20 00:10:20.194107712 +0000 UTC m=+2.173072004,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.110553 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1895cbe6b5dc8b81 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:20.194204545 +0000 UTC m=+2.173168837,LastTimestamp:2026-02-20 00:10:20.194204545 +0000 UTC m=+2.173168837,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.118746 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbe6b66effc1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:20.203802561 +0000 UTC m=+2.182766863,LastTimestamp:2026-02-20 00:10:20.203802561 +0000 UTC m=+2.182766863,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.125081 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1895cbe6b69f15c5 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:20.206953925 +0000 UTC m=+2.185918237,LastTimestamp:2026-02-20 00:10:20.206953925 +0000 UTC m=+2.185918237,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.129924 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1895cbe6b6b71348 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:20.208526152 +0000 UTC m=+2.187490454,LastTimestamp:2026-02-20 00:10:20.208526152 +0000 UTC m=+2.187490454,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.137106 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1895cbe6b6c8d2c5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:20.209689285 +0000 UTC m=+2.188653587,LastTimestamp:2026-02-20 00:10:20.209689285 +0000 UTC m=+2.188653587,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.140797 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbe6b6f00fa4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:20.212260772 +0000 UTC m=+2.191225074,LastTimestamp:2026-02-20 00:10:20.212260772 +0000 UTC m=+2.191225074,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.144933 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1895cbe6b70beb57 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container 
setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:20.214086487 +0000 UTC m=+2.193050779,LastTimestamp:2026-02-20 00:10:20.214086487 +0000 UTC m=+2.193050779,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.149421 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbe6b743addd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:20.217740765 +0000 UTC m=+2.196705067,LastTimestamp:2026-02-20 00:10:20.217740765 +0000 UTC m=+2.196705067,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.155240 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1895cbe6c6f2863a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:20.480857658 +0000 UTC m=+2.459821950,LastTimestamp:2026-02-20 00:10:20.480857658 +0000 UTC m=+2.459821950,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.162153 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1895cbe6c7da72d0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:20.49605704 +0000 UTC m=+2.475021342,LastTimestamp:2026-02-20 00:10:20.49605704 +0000 UTC m=+2.475021342,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.167413 5119 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1895cbe6c7e82af3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:20.496956147 +0000 UTC m=+2.475920439,LastTimestamp:2026-02-20 00:10:20.496956147 +0000 UTC m=+2.475920439,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.174191 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbe6df325713 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:20.887693075 +0000 UTC m=+2.866657377,LastTimestamp:2026-02-20 00:10:20.887693075 +0000 UTC m=+2.866657377,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.179367 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1895cbe6df5bf480 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:20.890420352 +0000 UTC m=+2.869384654,LastTimestamp:2026-02-20 00:10:20.890420352 +0000 UTC m=+2.869384654,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.185442 5119 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1895cbe6df7f1542 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:20.892722498 +0000 UTC m=+2.871686830,LastTimestamp:2026-02-20 00:10:20.892722498 +0000 UTC m=+2.871686830,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.191283 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbe6dfebf3c8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:20.899857352 +0000 UTC m=+2.878821644,LastTimestamp:2026-02-20 00:10:20.899857352 +0000 UTC m=+2.878821644,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.198326 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1895cbe6f0d6e32c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:21.183689516 +0000 UTC m=+3.162653808,LastTimestamp:2026-02-20 00:10:21.183689516 +0000 UTC m=+3.162653808,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.204148 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbe6f0f94f58 openshift-etcd 
0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:21.185945432 +0000 UTC m=+3.164909724,LastTimestamp:2026-02-20 00:10:21.185945432 +0000 UTC m=+3.164909724,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.209682 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1895cbe6f17daf34 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:21.194620724 +0000 UTC m=+3.173585016,LastTimestamp:2026-02-20 00:10:21.194620724 +0000 UTC m=+3.173585016,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.215511 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbe6f1949ae3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:21.196122851 +0000 UTC m=+3.175087153,LastTimestamp:2026-02-20 00:10:21.196122851 +0000 UTC m=+3.175087153,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.224113 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1895cbe6f1d4d246 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:21.200331334 +0000 UTC 
m=+3.179295626,LastTimestamp:2026-02-20 00:10:21.200331334 +0000 UTC m=+3.179295626,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.232513 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1895cbe6f1e8788d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:21.201619085 +0000 UTC m=+3.180583377,LastTimestamp:2026-02-20 00:10:21.201619085 +0000 UTC m=+3.180583377,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.233726 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1895cbe6f231729c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:21.206401692 +0000 UTC m=+3.185365984,LastTimestamp:2026-02-20 00:10:21.206401692 +0000 UTC m=+3.185365984,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.237681 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbe6f2535385 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:21.208621957 +0000 UTC m=+3.187586259,LastTimestamp:2026-02-20 00:10:21.208621957 +0000 UTC m=+3.187586259,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.242667 5119 event.go:359] "Server rejected 
event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbe6f28c6e6f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:21.212364399 +0000 UTC m=+3.191328691,LastTimestamp:2026-02-20 00:10:21.212364399 +0000 UTC m=+3.191328691,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.248072 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbe6f2a7a585 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:21.214147973 +0000 UTC m=+3.193112265,LastTimestamp:2026-02-20 00:10:21.214147973 +0000 UTC m=+3.193112265,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.253592 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1895cbe6f9254893 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:21.323045011 +0000 UTC m=+3.302009303,LastTimestamp:2026-02-20 00:10:21.323045011 +0000 UTC m=+3.302009303,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.259527 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1895cbe6fa13a689 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:21.338666633 +0000 UTC m=+3.317630925,LastTimestamp:2026-02-20 00:10:21.338666633 +0000 UTC m=+3.317630925,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.268992 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1895cbe6fa390b31 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:21.341117233 +0000 UTC m=+3.320081525,LastTimestamp:2026-02-20 00:10:21.341117233 +0000 UTC m=+3.320081525,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.274699 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1895cbe701b234ec openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:21.466498284 +0000 UTC m=+3.445462576,LastTimestamp:2026-02-20 00:10:21.466498284 +0000 UTC m=+3.445462576,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.279280 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbe701f09319 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:21.470585625 +0000 UTC m=+3.449549917,LastTimestamp:2026-02-20 00:10:21.470585625 +0000 UTC m=+3.449549917,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.290831 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1895cbe702f4fb3b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:21.487651643 +0000 UTC m=+3.466615945,LastTimestamp:2026-02-20 00:10:21.487651643 +0000 UTC m=+3.466615945,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.296123 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1895cbe7030b9077 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:21.489131639 +0000 UTC m=+3.468095931,LastTimestamp:2026-02-20 00:10:21.489131639 +0000 UTC m=+3.468095931,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.301907 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbe703557d8a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container 
kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:21.493976458 +0000 UTC m=+3.472940750,LastTimestamp:2026-02-20 00:10:21.493976458 +0000 UTC m=+3.472940750,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.309187 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbe703ae7d7f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:21.499809151 +0000 UTC m=+3.478773453,LastTimestamp:2026-02-20 00:10:21.499809151 +0000 UTC m=+3.478773453,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.320358 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1895cbe7096d8681 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:21.596214913 +0000 UTC m=+3.575179205,LastTimestamp:2026-02-20 00:10:21.596214913 +0000 UTC m=+3.575179205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.328982 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1895cbe70ac82c17 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:21.618932759 +0000 UTC 
m=+3.597897051,LastTimestamp:2026-02-20 00:10:21.618932759 +0000 UTC m=+3.597897051,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.336030 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbe710fff312 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:21.723251474 +0000 UTC m=+3.702215766,LastTimestamp:2026-02-20 00:10:21.723251474 +0000 UTC m=+3.702215766,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.341382 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1895cbe7110c20b0 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:21.724049584 +0000 UTC m=+3.703013866,LastTimestamp:2026-02-20 00:10:21.724049584 +0000 UTC m=+3.703013866,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.345727 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1895cbe712104907 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:21.741099271 +0000 UTC m=+3.720063563,LastTimestamp:2026-02-20 00:10:21.741099271 +0000 UTC m=+3.720063563,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.349769 5119 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbe7128afa65 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:21.749140069 +0000 UTC m=+3.728104361,LastTimestamp:2026-02-20 00:10:21.749140069 +0000 UTC m=+3.728104361,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.355924 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbe7129cad3f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:21.750299967 +0000 UTC m=+3.729264249,LastTimestamp:2026-02-20 00:10:21.750299967 +0000 UTC m=+3.729264249,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.364582 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbe71d311528 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:21.927798056 +0000 UTC m=+3.906762338,LastTimestamp:2026-02-20 00:10:21.927798056 +0000 UTC m=+3.906762338,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.371603 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbe720dec315 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:21.989511957 +0000 UTC m=+3.968476249,LastTimestamp:2026-02-20 00:10:21.989511957 +0000 UTC m=+3.968476249,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.376752 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbe72208b47a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:22.009037946 +0000 UTC m=+3.988002238,LastTimestamp:2026-02-20 00:10:22.009037946 +0000 UTC m=+3.988002238,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.384125 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbe72218ac1d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:22.010084381 +0000 UTC m=+3.989048673,LastTimestamp:2026-02-20 00:10:22.010084381 +0000 UTC m=+3.989048673,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.389961 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbe72f3e3528 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:22.230648104 +0000 UTC m=+4.209612396,LastTimestamp:2026-02-20 00:10:22.230648104 +0000 UTC m=+4.209612396,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.396837 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbe72f473ae7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:22.231239399 +0000 UTC m=+4.210203691,LastTimestamp:2026-02-20 00:10:22.231239399 +0000 UTC m=+4.210203691,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.401731 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbe730035297 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:22.243566231 +0000 UTC m=+4.222530523,LastTimestamp:2026-02-20 00:10:22.243566231 +0000 UTC m=+4.222530523,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.406308 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbe7306777be openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:22.250129342 +0000 UTC m=+4.229093634,LastTimestamp:2026-02-20 00:10:22.250129342 +0000 UTC m=+4.229093634,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.419674 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbe7598fbc7e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:22.940634238 +0000 UTC m=+4.919598540,LastTimestamp:2026-02-20 00:10:22.940634238 +0000 UTC m=+4.919598540,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.426551 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbe767c373af openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:23.178904495 +0000 UTC m=+5.157868797,LastTimestamp:2026-02-20 00:10:23.178904495 +0000 UTC m=+5.157868797,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.433407 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbe768b18709 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:23.194507017 +0000 UTC m=+5.173471349,LastTimestamp:2026-02-20 00:10:23.194507017 +0000 UTC m=+5.173471349,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.437276 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbe768cb1940 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:23.196182848 +0000 UTC m=+5.175147170,LastTimestamp:2026-02-20 00:10:23.196182848 +0000 UTC m=+5.175147170,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.442457 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbe778cc5eec openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:23.464701676 +0000 UTC m=+5.443665998,LastTimestamp:2026-02-20 00:10:23.464701676 +0000 UTC m=+5.443665998,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.446840 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbe77a0c6f44 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:23.48567738 +0000 UTC m=+5.464641712,LastTimestamp:2026-02-20 00:10:23.48567738 +0000 UTC m=+5.464641712,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.449384 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbe77a2cafff openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:23.487791103 +0000 UTC m=+5.466755445,LastTimestamp:2026-02-20 00:10:23.487791103 +0000 UTC 
m=+5.466755445,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.455876 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbe78a340185 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:23.756706181 +0000 UTC m=+5.735670483,LastTimestamp:2026-02-20 00:10:23.756706181 +0000 UTC m=+5.735670483,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.460807 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbe78b3f5497 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:23.774225559 +0000 UTC m=+5.753189851,LastTimestamp:2026-02-20 00:10:23.774225559 +0000 UTC m=+5.753189851,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.466141 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbe78b57f69b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:23.775839899 +0000 UTC m=+5.754804211,LastTimestamp:2026-02-20 00:10:23.775839899 +0000 UTC m=+5.754804211,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.471767 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbe79afc15d6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:24.038254038 +0000 UTC m=+6.017218340,LastTimestamp:2026-02-20 00:10:24.038254038 +0000 UTC m=+6.017218340,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.475621 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbe79c2f39a9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:24.058382761 +0000 UTC m=+6.037347063,LastTimestamp:2026-02-20 00:10:24.058382761 +0000 UTC m=+6.037347063,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.488042 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbe79c5728af openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:24.060999855 +0000 UTC m=+6.039964157,LastTimestamp:2026-02-20 00:10:24.060999855 +0000 UTC m=+6.039964157,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.496974 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbe7ab8a3620 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:24.316003872 +0000 UTC m=+6.294968204,LastTimestamp:2026-02-20 00:10:24.316003872 +0000 UTC m=+6.294968204,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.501504 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbe7acb6e46f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:24.335709295 +0000 UTC m=+6.314673587,LastTimestamp:2026-02-20 00:10:24.335709295 +0000 UTC m=+6.314673587,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.507410 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 20 00:10:39 crc kubenswrapper[5119]: &Event{ObjectMeta:{kube-controller-manager-crc.1895cbe9e5192c24 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 20 00:10:39 crc kubenswrapper[5119]: body: Feb 20 00:10:39 crc kubenswrapper[5119]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:33.871608868 +0000 UTC m=+15.850573190,LastTimestamp:2026-02-20 00:10:33.871608868 +0000 UTC m=+15.850573190,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 20 00:10:39 crc kubenswrapper[5119]: > Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.511879 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1895cbe9e51aac41 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:33.871707201 +0000 UTC m=+15.850671503,LastTimestamp:2026-02-20 00:10:33.871707201 +0000 UTC m=+15.850671503,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.517976 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 20 00:10:39 crc kubenswrapper[5119]: &Event{ObjectMeta:{kube-apiserver-crc.1895cbe9e67eeb1c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 20 00:10:39 crc kubenswrapper[5119]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 20 00:10:39 crc kubenswrapper[5119]: Feb 20 00:10:39 crc kubenswrapper[5119]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:33.895054108 +0000 UTC m=+15.874018460,LastTimestamp:2026-02-20 00:10:33.895054108 +0000 UTC m=+15.874018460,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 20 00:10:39 crc kubenswrapper[5119]: > Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.525493 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbe9e6830538 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:33.895322936 +0000 UTC m=+15.874287268,LastTimestamp:2026-02-20 00:10:33.895322936 +0000 UTC m=+15.874287268,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.529409 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1895cbe9e67eeb1c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 20 00:10:39 crc kubenswrapper[5119]: &Event{ObjectMeta:{kube-apiserver-crc.1895cbe9e67eeb1c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 20 00:10:39 crc kubenswrapper[5119]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get 
path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 20 00:10:39 crc kubenswrapper[5119]: Feb 20 00:10:39 crc kubenswrapper[5119]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:33.895054108 +0000 UTC m=+15.874018460,LastTimestamp:2026-02-20 00:10:33.900924706 +0000 UTC m=+15.879889018,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 20 00:10:39 crc kubenswrapper[5119]: > Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.540102 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1895cbe9e6830538\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbe9e6830538 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:33.895322936 +0000 UTC m=+15.874287268,LastTimestamp:2026-02-20 00:10:33.90109325 +0000 UTC m=+15.880057552,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.549973 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 20 00:10:39 crc kubenswrapper[5119]: &Event{ObjectMeta:{kube-apiserver-crc.1895cbeb14c2c5dc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:58024->192.168.126.11:17697: read: connection reset by peer Feb 20 00:10:39 crc kubenswrapper[5119]: body: Feb 20 00:10:39 crc kubenswrapper[5119]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:38.966220252 +0000 UTC m=+20.945184554,LastTimestamp:2026-02-20 00:10:38.966220252 +0000 UTC m=+20.945184554,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 20 00:10:39 crc kubenswrapper[5119]: > Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.554115 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbeb14c49d7a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58024->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:38.966340986 +0000 UTC m=+20.945305288,LastTimestamp:2026-02-20 00:10:38.966340986 +0000 UTC m=+20.945305288,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.566072 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 20 00:10:39 crc kubenswrapper[5119]: &Event{ObjectMeta:{kube-apiserver-crc.1895cbeb14cb0edb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:36424->192.168.126.11:17697: read: connection reset by peer Feb 20 00:10:39 crc kubenswrapper[5119]: body: Feb 20 00:10:39 crc kubenswrapper[5119]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:38.966763227 +0000 UTC m=+20.945727549,LastTimestamp:2026-02-20 00:10:38.966763227 +0000 UTC m=+20.945727549,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 20 00:10:39 crc kubenswrapper[5119]: > Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.574298 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbeb14cc83a3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36424->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:38.966858659 +0000 UTC m=+20.945822991,LastTimestamp:2026-02-20 00:10:38.966858659 +0000 UTC m=+20.945822991,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.580220 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 20 00:10:39 crc kubenswrapper[5119]: 
&Event{ObjectMeta:{kube-apiserver-crc.1895cbeb14d3cc6a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Feb 20 00:10:39 crc kubenswrapper[5119]: body: Feb 20 00:10:39 crc kubenswrapper[5119]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:38.967336042 +0000 UTC m=+20.946335054,LastTimestamp:2026-02-20 00:10:38.967336042 +0000 UTC m=+20.946335054,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 20 00:10:39 crc kubenswrapper[5119]: > Feb 20 00:10:39 crc kubenswrapper[5119]: E0220 00:10:39.585303 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbeb14d50c80 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:38.967417984 +0000 UTC m=+20.946382306,LastTimestamp:2026-02-20 00:10:38.967417984 +0000 UTC m=+20.946382306,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:39 crc kubenswrapper[5119]: I0220 00:10:39.728629 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:10:40 crc kubenswrapper[5119]: I0220 00:10:40.013148 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Feb 20 00:10:40 crc kubenswrapper[5119]: I0220 00:10:40.018612 5119 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b542f2ecea8d7e33394975fb1b2ee41c1eb356fdab06e5b5584d6f661c56c30e" exitCode=255 Feb 20 00:10:40 crc kubenswrapper[5119]: I0220 00:10:40.018723 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"b542f2ecea8d7e33394975fb1b2ee41c1eb356fdab06e5b5584d6f661c56c30e"} Feb 20 00:10:40 crc kubenswrapper[5119]: I0220 00:10:40.019009 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:40 crc kubenswrapper[5119]: I0220 00:10:40.019635 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 20 00:10:40 crc kubenswrapper[5119]: I0220 00:10:40.019678 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:40 crc kubenswrapper[5119]: I0220 00:10:40.019690 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:40 crc kubenswrapper[5119]: E0220 00:10:40.020084 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:40 crc kubenswrapper[5119]: I0220 00:10:40.020376 5119 scope.go:117] "RemoveContainer" containerID="b542f2ecea8d7e33394975fb1b2ee41c1eb356fdab06e5b5584d6f661c56c30e" Feb 20 00:10:40 crc kubenswrapper[5119]: E0220 00:10:40.028695 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1895cbe72218ac1d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbe72218ac1d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:22.010084381 +0000 UTC m=+3.989048673,LastTimestamp:2026-02-20 00:10:40.0221018 +0000 UTC m=+22.001066092,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:40 crc kubenswrapper[5119]: E0220 00:10:40.247389 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1895cbe72f3e3528\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbe72f3e3528 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:22.230648104 +0000 UTC m=+4.209612396,LastTimestamp:2026-02-20 00:10:40.241031194 +0000 UTC m=+22.219995486,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:40 crc kubenswrapper[5119]: E0220 00:10:40.256155 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1895cbe730035297\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbe730035297 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:22.243566231 +0000 UTC m=+4.222530523,LastTimestamp:2026-02-20 00:10:40.2505337 +0000 UTC m=+22.229497992,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:40 crc kubenswrapper[5119]: I0220 00:10:40.729866 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:10:40 crc kubenswrapper[5119]: I0220 00:10:40.893088 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:10:40 crc kubenswrapper[5119]: I0220 00:10:40.893407 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:40 crc kubenswrapper[5119]: I0220 00:10:40.894429 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:40 crc kubenswrapper[5119]: I0220 00:10:40.894479 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:40 crc kubenswrapper[5119]: I0220 00:10:40.894522 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:40 crc kubenswrapper[5119]: E0220 00:10:40.894959 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:40 crc kubenswrapper[5119]: I0220 00:10:40.897948 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:10:41 crc kubenswrapper[5119]: I0220 00:10:41.024629 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Feb 20 00:10:41 crc kubenswrapper[5119]: I0220 00:10:41.026873 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"583048b4def2d0815306333b8b7eb58d88179391e2f4fb8baf25a139105cf5b1"} Feb 20 00:10:41 crc kubenswrapper[5119]: I0220 00:10:41.027064 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:41 crc kubenswrapper[5119]: I0220 00:10:41.027285 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:41 crc kubenswrapper[5119]: I0220 00:10:41.027746 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:41 crc kubenswrapper[5119]: I0220 00:10:41.027798 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:41 crc kubenswrapper[5119]: I0220 00:10:41.027812 5119 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:41 crc kubenswrapper[5119]: E0220 00:10:41.028365 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:41 crc kubenswrapper[5119]: I0220 00:10:41.029273 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:41 crc kubenswrapper[5119]: I0220 00:10:41.029312 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:41 crc kubenswrapper[5119]: I0220 00:10:41.029323 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:41 crc kubenswrapper[5119]: E0220 00:10:41.029767 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:41 crc kubenswrapper[5119]: E0220 00:10:41.377031 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 20 00:10:41 crc kubenswrapper[5119]: I0220 00:10:41.740294 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:10:42 crc kubenswrapper[5119]: I0220 00:10:42.728982 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:10:43 crc kubenswrapper[5119]: I0220 00:10:43.034514 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Feb 20 00:10:43 crc kubenswrapper[5119]: I0220 00:10:43.035235 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Feb 20 00:10:43 crc kubenswrapper[5119]: I0220 00:10:43.037525 5119 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="583048b4def2d0815306333b8b7eb58d88179391e2f4fb8baf25a139105cf5b1" exitCode=255 Feb 20 00:10:43 crc kubenswrapper[5119]: I0220 00:10:43.037580 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"583048b4def2d0815306333b8b7eb58d88179391e2f4fb8baf25a139105cf5b1"} Feb 20 00:10:43 crc kubenswrapper[5119]: I0220 00:10:43.037728 5119 scope.go:117] "RemoveContainer" containerID="b542f2ecea8d7e33394975fb1b2ee41c1eb356fdab06e5b5584d6f661c56c30e" Feb 20 00:10:43 crc kubenswrapper[5119]: I0220 00:10:43.038167 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:43 crc kubenswrapper[5119]: I0220 00:10:43.039062 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:43 
crc kubenswrapper[5119]: I0220 00:10:43.039114 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:43 crc kubenswrapper[5119]: I0220 00:10:43.039131 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:43 crc kubenswrapper[5119]: E0220 00:10:43.039520 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:43 crc kubenswrapper[5119]: I0220 00:10:43.039933 5119 scope.go:117] "RemoveContainer" containerID="583048b4def2d0815306333b8b7eb58d88179391e2f4fb8baf25a139105cf5b1" Feb 20 00:10:43 crc kubenswrapper[5119]: E0220 00:10:43.040210 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 20 00:10:43 crc kubenswrapper[5119]: E0220 00:10:43.049765 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbec07964fd8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:43.040169944 +0000 UTC m=+25.019134246,LastTimestamp:2026-02-20 00:10:43.040169944 +0000 UTC m=+25.019134246,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:43 crc kubenswrapper[5119]: I0220 00:10:43.730663 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:10:44 crc kubenswrapper[5119]: I0220 00:10:44.042148 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Feb 20 00:10:44 crc kubenswrapper[5119]: I0220 00:10:44.727103 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:10:45 crc kubenswrapper[5119]: I0220 00:10:45.314928 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:45 crc kubenswrapper[5119]: I0220 00:10:45.316464 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:45 crc 
kubenswrapper[5119]: I0220 00:10:45.316532 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:45 crc kubenswrapper[5119]: I0220 00:10:45.316601 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:45 crc kubenswrapper[5119]: I0220 00:10:45.316652 5119 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 20 00:10:45 crc kubenswrapper[5119]: E0220 00:10:45.332397 5119 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 20 00:10:45 crc kubenswrapper[5119]: E0220 00:10:45.528289 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 20 00:10:45 crc kubenswrapper[5119]: I0220 00:10:45.732861 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:10:46 crc kubenswrapper[5119]: E0220 00:10:46.325499 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 20 00:10:46 crc kubenswrapper[5119]: I0220 00:10:46.730755 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:10:47 crc kubenswrapper[5119]: I0220 00:10:47.731877 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:10:48 crc kubenswrapper[5119]: E0220 00:10:48.383761 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 20 00:10:48 crc kubenswrapper[5119]: I0220 00:10:48.732156 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:10:48 crc kubenswrapper[5119]: E0220 00:10:48.931604 5119 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 20 00:10:48 crc kubenswrapper[5119]: I0220 00:10:48.946248 5119 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:10:48 crc kubenswrapper[5119]: I0220 00:10:48.946694 5119 kubelet_node_status.go:413] 
"Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:48 crc kubenswrapper[5119]: I0220 00:10:48.947993 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:48 crc kubenswrapper[5119]: I0220 00:10:48.948078 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:48 crc kubenswrapper[5119]: I0220 00:10:48.948102 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:48 crc kubenswrapper[5119]: E0220 00:10:48.948880 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:48 crc kubenswrapper[5119]: I0220 00:10:48.949372 5119 scope.go:117] "RemoveContainer" containerID="583048b4def2d0815306333b8b7eb58d88179391e2f4fb8baf25a139105cf5b1" Feb 20 00:10:48 crc kubenswrapper[5119]: E0220 00:10:48.949772 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 20 00:10:48 crc kubenswrapper[5119]: E0220 00:10:48.958095 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1895cbec07964fd8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbec07964fd8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:43.040169944 +0000 UTC m=+25.019134246,LastTimestamp:2026-02-20 00:10:48.94971407 +0000 UTC m=+30.928678392,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:49 crc kubenswrapper[5119]: E0220 00:10:49.013115 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 20 00:10:49 crc kubenswrapper[5119]: I0220 00:10:49.732407 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:10:50 crc kubenswrapper[5119]: E0220 00:10:50.551959 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API 
group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 20 00:10:50 crc kubenswrapper[5119]: I0220 00:10:50.730527 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:10:51 crc kubenswrapper[5119]: I0220 00:10:51.028283 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:10:51 crc kubenswrapper[5119]: I0220 00:10:51.028592 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:51 crc kubenswrapper[5119]: I0220 00:10:51.029743 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:51 crc kubenswrapper[5119]: I0220 00:10:51.029807 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:51 crc kubenswrapper[5119]: I0220 00:10:51.029827 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:51 crc kubenswrapper[5119]: E0220 00:10:51.030690 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:10:51 crc kubenswrapper[5119]: I0220 00:10:51.031090 5119 scope.go:117] "RemoveContainer" containerID="583048b4def2d0815306333b8b7eb58d88179391e2f4fb8baf25a139105cf5b1" Feb 20 00:10:51 crc kubenswrapper[5119]: E0220 00:10:51.031409 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 20 00:10:51 crc kubenswrapper[5119]: E0220 00:10:51.038472 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1895cbec07964fd8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbec07964fd8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:43.040169944 +0000 UTC m=+25.019134246,LastTimestamp:2026-02-20 00:10:51.031362708 +0000 UTC m=+33.010327010,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:10:51 crc kubenswrapper[5119]: I0220 00:10:51.731860 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:10:52 crc kubenswrapper[5119]: I0220 00:10:52.333583 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:52 crc kubenswrapper[5119]: I0220 00:10:52.334923 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:52 crc kubenswrapper[5119]: I0220 00:10:52.334983 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:52 crc kubenswrapper[5119]: I0220 00:10:52.335004 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:52 crc kubenswrapper[5119]: I0220 00:10:52.335040 5119 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 20 00:10:52 crc kubenswrapper[5119]: E0220 00:10:52.346931 5119 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 20 00:10:52 crc kubenswrapper[5119]: I0220 00:10:52.733434 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:10:53 crc kubenswrapper[5119]: I0220 00:10:53.728483 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:10:54 crc kubenswrapper[5119]: I0220 00:10:54.731765 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:10:55 crc kubenswrapper[5119]: E0220 00:10:55.391618 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 20 00:10:55 crc kubenswrapper[5119]: I0220 00:10:55.731591 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:10:56 crc kubenswrapper[5119]: I0220 00:10:56.732141 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:10:57 crc kubenswrapper[5119]: I0220 00:10:57.731404 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:10:58 crc kubenswrapper[5119]: I0220 00:10:58.731363 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io 
"crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:10:58 crc kubenswrapper[5119]: E0220 00:10:58.932854 5119 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 20 00:10:59 crc kubenswrapper[5119]: I0220 00:10:59.347115 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:10:59 crc kubenswrapper[5119]: I0220 00:10:59.348255 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:59 crc kubenswrapper[5119]: I0220 00:10:59.348329 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:59 crc kubenswrapper[5119]: I0220 00:10:59.348363 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:59 crc kubenswrapper[5119]: I0220 00:10:59.348393 5119 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 20 00:10:59 crc kubenswrapper[5119]: E0220 00:10:59.360041 5119 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 20 00:10:59 crc kubenswrapper[5119]: I0220 00:10:59.729093 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:11:00 crc kubenswrapper[5119]: I0220 00:11:00.729140 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:11:01 crc kubenswrapper[5119]: E0220 00:11:01.032954 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 20 00:11:01 crc kubenswrapper[5119]: I0220 00:11:01.731137 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:11:02 crc kubenswrapper[5119]: E0220 00:11:02.399921 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 20 00:11:02 crc kubenswrapper[5119]: I0220 00:11:02.731930 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:11:03 crc kubenswrapper[5119]: I0220 00:11:03.730082 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:11:04 crc kubenswrapper[5119]: E0220 00:11:04.200433 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 20 00:11:04 crc kubenswrapper[5119]: I0220 00:11:04.732682 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:11:05 crc kubenswrapper[5119]: I0220 00:11:05.725958 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:11:05 crc kubenswrapper[5119]: I0220 00:11:05.855633 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:11:05 crc kubenswrapper[5119]: I0220 00:11:05.857115 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:05 crc kubenswrapper[5119]: I0220 00:11:05.857234 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:05 crc kubenswrapper[5119]: I0220 00:11:05.857329 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:05 crc kubenswrapper[5119]: E0220 00:11:05.857744 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:11:05 crc kubenswrapper[5119]: I0220 00:11:05.858113 5119 scope.go:117] "RemoveContainer" containerID="583048b4def2d0815306333b8b7eb58d88179391e2f4fb8baf25a139105cf5b1" Feb 20 00:11:05 crc kubenswrapper[5119]: E0220 00:11:05.868022 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1895cbe72218ac1d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbe72218ac1d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:22.010084381 +0000 UTC m=+3.989048673,LastTimestamp:2026-02-20 00:11:05.85975553 +0000 UTC m=+47.838719822,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:11:06 crc kubenswrapper[5119]: E0220 00:11:06.118688 5119 event.go:359] "Server rejected event (will not retry!)" err="events 
\"kube-apiserver-crc.1895cbe72f3e3528\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbe72f3e3528 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:22.230648104 +0000 UTC m=+4.209612396,LastTimestamp:2026-02-20 00:11:06.111837605 +0000 UTC m=+48.090801937,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:11:06 crc kubenswrapper[5119]: I0220 00:11:06.120913 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Feb 20 00:11:06 crc kubenswrapper[5119]: E0220 00:11:06.135142 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1895cbe730035297\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbe730035297 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:22.243566231 +0000 UTC m=+4.222530523,LastTimestamp:2026-02-20 00:11:06.127062524 +0000 UTC m=+48.106026856,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:11:06 crc kubenswrapper[5119]: I0220 00:11:06.277351 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 20 00:11:06 crc kubenswrapper[5119]: I0220 00:11:06.277721 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:11:06 crc kubenswrapper[5119]: I0220 00:11:06.279629 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:06 crc kubenswrapper[5119]: I0220 00:11:06.279704 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:06 crc kubenswrapper[5119]: I0220 00:11:06.279725 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:06 crc kubenswrapper[5119]: E0220 00:11:06.280360 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:11:06 crc kubenswrapper[5119]: I0220 00:11:06.360276 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:11:06 crc 
kubenswrapper[5119]: I0220 00:11:06.361670 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:06 crc kubenswrapper[5119]: I0220 00:11:06.361728 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:06 crc kubenswrapper[5119]: I0220 00:11:06.361747 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:06 crc kubenswrapper[5119]: I0220 00:11:06.361799 5119 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 20 00:11:06 crc kubenswrapper[5119]: E0220 00:11:06.381463 5119 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 20 00:11:06 crc kubenswrapper[5119]: I0220 00:11:06.732133 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:11:07 crc kubenswrapper[5119]: I0220 00:11:07.129252 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Feb 20 00:11:07 crc kubenswrapper[5119]: I0220 00:11:07.131754 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"47fd1138696a89da774ff48059fa2e14f374411cb6e94904df8086b651cb5617"} Feb 20 00:11:07 crc kubenswrapper[5119]: I0220 00:11:07.132035 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:11:07 crc kubenswrapper[5119]: I0220 00:11:07.132657 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:07 crc kubenswrapper[5119]: I0220 00:11:07.132705 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:07 crc kubenswrapper[5119]: I0220 00:11:07.132742 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:07 crc kubenswrapper[5119]: E0220 00:11:07.133226 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:11:07 crc kubenswrapper[5119]: I0220 00:11:07.731278 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:11:08 crc kubenswrapper[5119]: I0220 00:11:08.138836 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 20 00:11:08 crc kubenswrapper[5119]: I0220 00:11:08.141309 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Feb 20 00:11:08 crc kubenswrapper[5119]: I0220 00:11:08.145021 5119 
generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="47fd1138696a89da774ff48059fa2e14f374411cb6e94904df8086b651cb5617" exitCode=255 Feb 20 00:11:08 crc kubenswrapper[5119]: I0220 00:11:08.145104 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"47fd1138696a89da774ff48059fa2e14f374411cb6e94904df8086b651cb5617"} Feb 20 00:11:08 crc kubenswrapper[5119]: I0220 00:11:08.145171 5119 scope.go:117] "RemoveContainer" containerID="583048b4def2d0815306333b8b7eb58d88179391e2f4fb8baf25a139105cf5b1" Feb 20 00:11:08 crc kubenswrapper[5119]: I0220 00:11:08.145470 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:11:08 crc kubenswrapper[5119]: I0220 00:11:08.146318 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:08 crc kubenswrapper[5119]: I0220 00:11:08.146378 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:08 crc kubenswrapper[5119]: I0220 00:11:08.146400 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:08 crc kubenswrapper[5119]: E0220 00:11:08.147172 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:11:08 crc kubenswrapper[5119]: I0220 00:11:08.147657 5119 scope.go:117] "RemoveContainer" containerID="47fd1138696a89da774ff48059fa2e14f374411cb6e94904df8086b651cb5617" Feb 20 00:11:08 crc kubenswrapper[5119]: E0220 00:11:08.148078 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 20 00:11:08 crc kubenswrapper[5119]: E0220 00:11:08.157987 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1895cbec07964fd8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbec07964fd8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:43.040169944 +0000 UTC m=+25.019134246,LastTimestamp:2026-02-20 00:11:08.14800473 +0000 UTC m=+50.126969052,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:11:08 crc kubenswrapper[5119]: I0220 00:11:08.729876 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:11:08 crc kubenswrapper[5119]: E0220 00:11:08.933230 5119 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 20 00:11:08 crc kubenswrapper[5119]: I0220 00:11:08.945941 5119 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:11:09 crc kubenswrapper[5119]: I0220 00:11:09.149905 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 20 00:11:09 crc kubenswrapper[5119]: I0220 00:11:09.152236 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:11:09 crc kubenswrapper[5119]: I0220 00:11:09.152997 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:09 crc kubenswrapper[5119]: I0220 00:11:09.153090 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:09 crc kubenswrapper[5119]: I0220 00:11:09.153120 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:09 crc kubenswrapper[5119]: E0220 00:11:09.154106 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:11:09 crc kubenswrapper[5119]: I0220 00:11:09.154618 5119 scope.go:117] "RemoveContainer" containerID="47fd1138696a89da774ff48059fa2e14f374411cb6e94904df8086b651cb5617" Feb 20 00:11:09 crc kubenswrapper[5119]: E0220 00:11:09.154951 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 20 00:11:09 crc kubenswrapper[5119]: E0220 00:11:09.163099 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1895cbec07964fd8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbec07964fd8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:43.040169944 +0000 UTC m=+25.019134246,LastTimestamp:2026-02-20 00:11:09.154910631 +0000 UTC m=+51.133874963,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:11:09 crc kubenswrapper[5119]: E0220 
00:11:09.407118 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 20 00:11:09 crc kubenswrapper[5119]: I0220 00:11:09.727372 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:11:10 crc kubenswrapper[5119]: E0220 00:11:10.374485 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 20 00:11:10 crc kubenswrapper[5119]: I0220 00:11:10.732712 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:11:11 crc kubenswrapper[5119]: I0220 00:11:11.733450 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:11:11 crc kubenswrapper[5119]: E0220 00:11:11.734267 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 20 00:11:12 crc kubenswrapper[5119]: I0220 00:11:12.732609 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:11:13 crc kubenswrapper[5119]: I0220 00:11:13.381954 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:11:13 crc kubenswrapper[5119]: I0220 00:11:13.384207 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:13 crc kubenswrapper[5119]: I0220 00:11:13.384535 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:13 crc kubenswrapper[5119]: I0220 00:11:13.384800 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:13 crc kubenswrapper[5119]: I0220 00:11:13.385055 5119 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 20 00:11:13 crc kubenswrapper[5119]: E0220 00:11:13.400372 5119 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 20 00:11:13 crc kubenswrapper[5119]: I0220 00:11:13.732121 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is 
forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:11:14 crc kubenswrapper[5119]: I0220 00:11:14.730367 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:11:15 crc kubenswrapper[5119]: I0220 00:11:15.731753 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:11:16 crc kubenswrapper[5119]: E0220 00:11:16.417320 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 20 00:11:16 crc kubenswrapper[5119]: I0220 00:11:16.731942 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:11:17 crc kubenswrapper[5119]: I0220 00:11:17.132619 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:11:17 crc kubenswrapper[5119]: I0220 00:11:17.133089 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:11:17 crc kubenswrapper[5119]: I0220 00:11:17.134488 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:17 crc kubenswrapper[5119]: I0220 00:11:17.134569 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:17 crc kubenswrapper[5119]: I0220 00:11:17.134585 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:17 crc kubenswrapper[5119]: E0220 00:11:17.135135 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:11:17 crc kubenswrapper[5119]: I0220 00:11:17.135532 5119 scope.go:117] "RemoveContainer" containerID="47fd1138696a89da774ff48059fa2e14f374411cb6e94904df8086b651cb5617" Feb 20 00:11:17 crc kubenswrapper[5119]: E0220 00:11:17.135816 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 20 00:11:17 crc kubenswrapper[5119]: E0220 00:11:17.142474 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1895cbec07964fd8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbec07964fd8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:10:43.040169944 +0000 UTC m=+25.019134246,LastTimestamp:2026-02-20 00:11:17.135771055 +0000 UTC m=+59.114735347,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:11:17 crc kubenswrapper[5119]: I0220 00:11:17.732631 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:11:18 crc kubenswrapper[5119]: I0220 00:11:18.730964 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:11:18 crc kubenswrapper[5119]: E0220 00:11:18.934663 5119 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 20 00:11:19 crc kubenswrapper[5119]: I0220 00:11:19.729327 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:11:20 crc kubenswrapper[5119]: I0220 00:11:20.400754 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:11:20 crc kubenswrapper[5119]: I0220 00:11:20.402113 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:20 crc kubenswrapper[5119]: I0220 00:11:20.402173 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:20 crc kubenswrapper[5119]: I0220 00:11:20.402193 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:20 crc kubenswrapper[5119]: I0220 00:11:20.402232 5119 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 20 00:11:20 crc kubenswrapper[5119]: E0220 00:11:20.418486 5119 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 20 00:11:20 crc kubenswrapper[5119]: I0220 00:11:20.731714 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:11:21 crc kubenswrapper[5119]: I0220 00:11:21.731630 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" 
at the cluster scope Feb 20 00:11:22 crc kubenswrapper[5119]: I0220 00:11:22.731130 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:11:23 crc kubenswrapper[5119]: E0220 00:11:23.427288 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 20 00:11:23 crc kubenswrapper[5119]: I0220 00:11:23.731993 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:11:23 crc kubenswrapper[5119]: I0220 00:11:23.905391 5119 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-8k6pr" Feb 20 00:11:23 crc kubenswrapper[5119]: I0220 00:11:23.919868 5119 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-8k6pr" Feb 20 00:11:23 crc kubenswrapper[5119]: I0220 00:11:23.945390 5119 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 20 00:11:24 crc kubenswrapper[5119]: I0220 00:11:24.542535 5119 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 20 00:11:24 crc kubenswrapper[5119]: I0220 00:11:24.922220 5119 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-03-22 00:06:23 +0000 UTC" deadline="2026-03-14 17:52:14.081148692 +0000 UTC" Feb 20 00:11:24 crc kubenswrapper[5119]: I0220 00:11:24.922298 5119 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="545h40m49.158857815s" Feb 20 00:11:27 crc kubenswrapper[5119]: I0220 00:11:27.419002 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:11:27 crc kubenswrapper[5119]: I0220 00:11:27.420520 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:27 crc kubenswrapper[5119]: I0220 00:11:27.420643 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:27 crc kubenswrapper[5119]: I0220 00:11:27.420674 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:27 crc kubenswrapper[5119]: I0220 00:11:27.420939 5119 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 20 00:11:27 crc kubenswrapper[5119]: I0220 00:11:27.431838 5119 kubelet_node_status.go:127] "Node was previously registered" node="crc" Feb 20 00:11:27 crc kubenswrapper[5119]: I0220 00:11:27.432253 5119 kubelet_node_status.go:81] "Successfully registered node" node="crc" Feb 20 00:11:27 crc kubenswrapper[5119]: E0220 00:11:27.432290 5119 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 20 00:11:27 crc kubenswrapper[5119]: I0220 00:11:27.438691 5119 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:27 crc kubenswrapper[5119]: I0220 00:11:27.438742 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:27 crc kubenswrapper[5119]: I0220 00:11:27.438752 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:27 crc kubenswrapper[5119]: I0220 00:11:27.438771 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:27 crc kubenswrapper[5119]: I0220 00:11:27.438783 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:27Z","lastTransitionTime":"2026-02-20T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:27 crc kubenswrapper[5119]: E0220 00:11:27.453965 5119 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"301425df-f98b-4e7d-a726-c87ed89cc7b9\\\",\\\"systemUUID\\\":\\\"0b11b3ff-8b58-4601-b700-d0d714919b4e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:27 crc kubenswrapper[5119]: I0220 00:11:27.464148 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:27 crc kubenswrapper[5119]: I0220 00:11:27.464206 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:27 crc kubenswrapper[5119]: I0220 00:11:27.464217 5119 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:27 crc kubenswrapper[5119]: I0220 00:11:27.464232 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:27 crc kubenswrapper[5119]: I0220 00:11:27.464246 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:27Z","lastTransitionTime":"2026-02-20T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:27 crc kubenswrapper[5119]: E0220 00:11:27.474423 5119 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"301425df-f98b-4e7d-a726-c87ed89cc7b9\\\",\\\"systemUUID\\\":\\\"0b11b3ff-8b58-4601-b700-d0d714919b4e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:27 crc kubenswrapper[5119]: I0220 00:11:27.478361 5119 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Feb 20 00:11:27 crc kubenswrapper[5119]: I0220 00:11:27.478388 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:27 crc kubenswrapper[5119]: I0220 00:11:27.478503 5119 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:27 crc kubenswrapper[5119]: I0220 00:11:27.478521 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:27 crc kubenswrapper[5119]: I0220 00:11:27.478559 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:27 crc kubenswrapper[5119]: I0220 00:11:27.478574 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:27Z","lastTransitionTime":"2026-02-20T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:27 crc kubenswrapper[5119]: E0220 00:11:27.490147 5119 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"301425df-f98b-4e7d-a726-c87ed89cc7b9\\\",\\\"systemUUID\\\":\\\"0b11b3ff-8b58-4601-b700-d0d714919b4e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:27 crc kubenswrapper[5119]: I0220 00:11:27.494452 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:27 crc kubenswrapper[5119]: I0220 00:11:27.494526 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:27 crc kubenswrapper[5119]: I0220 00:11:27.494610 5119 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:27 crc kubenswrapper[5119]: I0220 00:11:27.494643 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:27 crc kubenswrapper[5119]: I0220 00:11:27.494667 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:27Z","lastTransitionTime":"2026-02-20T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:27 crc kubenswrapper[5119]: E0220 00:11:27.505991 5119 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"301425df-f98b-4e7d-a726-c87ed89cc7b9\\\",\\\"systemUUID\\\":\\\"0b11b3ff-8b58-4601-b700-d0d714919b4e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:27 crc kubenswrapper[5119]: E0220 00:11:27.506120 5119 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 20 00:11:27 crc kubenswrapper[5119]: E0220 00:11:27.506157 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:27 crc kubenswrapper[5119]: E0220 00:11:27.606616 5119 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:27 crc kubenswrapper[5119]: E0220 00:11:27.706808 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:27 crc kubenswrapper[5119]: E0220 00:11:27.807882 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:27 crc kubenswrapper[5119]: E0220 00:11:27.908261 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:28 crc kubenswrapper[5119]: E0220 00:11:28.008781 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:28 crc kubenswrapper[5119]: E0220 00:11:28.109502 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:28 crc kubenswrapper[5119]: E0220 00:11:28.209648 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:28 crc kubenswrapper[5119]: E0220 00:11:28.310756 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:28 crc kubenswrapper[5119]: E0220 00:11:28.412062 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:28 crc kubenswrapper[5119]: E0220 00:11:28.512634 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:28 crc kubenswrapper[5119]: E0220 00:11:28.613804 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:28 crc kubenswrapper[5119]: E0220 00:11:28.714928 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:28 crc kubenswrapper[5119]: E0220 00:11:28.815277 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:28 crc kubenswrapper[5119]: E0220 00:11:28.915570 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:28 crc kubenswrapper[5119]: E0220 00:11:28.935107 5119 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 20 00:11:29 crc kubenswrapper[5119]: E0220 00:11:29.016700 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:29 crc kubenswrapper[5119]: E0220 00:11:29.116955 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:29 crc kubenswrapper[5119]: E0220 00:11:29.217995 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:29 crc kubenswrapper[5119]: E0220 00:11:29.318430 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:29 crc kubenswrapper[5119]: E0220 00:11:29.419048 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:29 crc kubenswrapper[5119]: E0220 00:11:29.519688 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:29 crc 
kubenswrapper[5119]: E0220 00:11:29.620224 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:29 crc kubenswrapper[5119]: E0220 00:11:29.721099 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:29 crc kubenswrapper[5119]: E0220 00:11:29.821695 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:29 crc kubenswrapper[5119]: E0220 00:11:29.922098 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:30 crc kubenswrapper[5119]: E0220 00:11:30.023268 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:30 crc kubenswrapper[5119]: E0220 00:11:30.123899 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:30 crc kubenswrapper[5119]: E0220 00:11:30.224965 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:30 crc kubenswrapper[5119]: E0220 00:11:30.325804 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:30 crc kubenswrapper[5119]: E0220 00:11:30.426702 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:30 crc kubenswrapper[5119]: E0220 00:11:30.527587 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:30 crc kubenswrapper[5119]: E0220 00:11:30.628777 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:30 crc kubenswrapper[5119]: E0220 00:11:30.729139 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:30 crc kubenswrapper[5119]: E0220 00:11:30.830274 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:30 crc kubenswrapper[5119]: I0220 00:11:30.855860 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:11:30 crc kubenswrapper[5119]: I0220 00:11:30.857207 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:30 crc kubenswrapper[5119]: I0220 00:11:30.857268 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:30 crc kubenswrapper[5119]: I0220 00:11:30.857287 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:30 crc kubenswrapper[5119]: E0220 00:11:30.858116 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:11:30 crc kubenswrapper[5119]: I0220 00:11:30.858566 5119 scope.go:117] "RemoveContainer" containerID="47fd1138696a89da774ff48059fa2e14f374411cb6e94904df8086b651cb5617" Feb 20 00:11:30 crc kubenswrapper[5119]: E0220 00:11:30.931003 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:31 crc kubenswrapper[5119]: E0220 00:11:31.031202 5119 kubelet_node_status.go:515] "Error getting the current node 
from lister" err="node \"crc\" not found" Feb 20 00:11:31 crc kubenswrapper[5119]: E0220 00:11:31.131925 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:31 crc kubenswrapper[5119]: I0220 00:11:31.221753 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 20 00:11:31 crc kubenswrapper[5119]: I0220 00:11:31.223685 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"f6194530545437982490600b50c8861471742ab842e14ab627a123778c428dcf"} Feb 20 00:11:31 crc kubenswrapper[5119]: I0220 00:11:31.223952 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:11:31 crc kubenswrapper[5119]: I0220 00:11:31.224685 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:31 crc kubenswrapper[5119]: I0220 00:11:31.224743 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:31 crc kubenswrapper[5119]: I0220 00:11:31.224762 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:31 crc kubenswrapper[5119]: E0220 00:11:31.225686 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:11:31 crc kubenswrapper[5119]: E0220 00:11:31.232737 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:31 crc kubenswrapper[5119]: E0220 00:11:31.332887 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:31 crc kubenswrapper[5119]: E0220 00:11:31.433316 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:31 crc kubenswrapper[5119]: E0220 00:11:31.533665 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:31 crc kubenswrapper[5119]: E0220 00:11:31.634626 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:31 crc kubenswrapper[5119]: E0220 00:11:31.734771 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:31 crc kubenswrapper[5119]: E0220 00:11:31.835731 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:31 crc kubenswrapper[5119]: E0220 00:11:31.936708 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:32 crc kubenswrapper[5119]: E0220 00:11:32.037660 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:32 crc kubenswrapper[5119]: E0220 00:11:32.138298 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:32 crc kubenswrapper[5119]: E0220 00:11:32.239056 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:32 crc 
kubenswrapper[5119]: E0220 00:11:32.339576 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:32 crc kubenswrapper[5119]: E0220 00:11:32.440296 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:32 crc kubenswrapper[5119]: E0220 00:11:32.541462 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:32 crc kubenswrapper[5119]: E0220 00:11:32.642189 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:32 crc kubenswrapper[5119]: E0220 00:11:32.743085 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:32 crc kubenswrapper[5119]: E0220 00:11:32.843915 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:32 crc kubenswrapper[5119]: E0220 00:11:32.944890 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:33 crc kubenswrapper[5119]: E0220 00:11:33.045608 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:33 crc kubenswrapper[5119]: E0220 00:11:33.146441 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:33 crc kubenswrapper[5119]: I0220 00:11:33.231437 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 20 00:11:33 crc kubenswrapper[5119]: I0220 00:11:33.232214 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 20 00:11:33 crc kubenswrapper[5119]: I0220 00:11:33.234187 5119 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="f6194530545437982490600b50c8861471742ab842e14ab627a123778c428dcf" exitCode=255 Feb 20 00:11:33 crc kubenswrapper[5119]: I0220 00:11:33.234275 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"f6194530545437982490600b50c8861471742ab842e14ab627a123778c428dcf"} Feb 20 00:11:33 crc kubenswrapper[5119]: I0220 00:11:33.234351 5119 scope.go:117] "RemoveContainer" containerID="47fd1138696a89da774ff48059fa2e14f374411cb6e94904df8086b651cb5617" Feb 20 00:11:33 crc kubenswrapper[5119]: I0220 00:11:33.234828 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:11:33 crc kubenswrapper[5119]: I0220 00:11:33.242340 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:33 crc kubenswrapper[5119]: I0220 00:11:33.242384 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:33 crc kubenswrapper[5119]: I0220 00:11:33.242400 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:33 crc kubenswrapper[5119]: E0220 00:11:33.242997 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to 
get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 20 00:11:33 crc kubenswrapper[5119]: I0220 00:11:33.243323 5119 scope.go:117] "RemoveContainer" containerID="f6194530545437982490600b50c8861471742ab842e14ab627a123778c428dcf"
Feb 20 00:11:33 crc kubenswrapper[5119]: E0220 00:11:33.243636 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 20 00:11:33 crc kubenswrapper[5119]: E0220 00:11:33.247207 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:33 crc kubenswrapper[5119]: E0220 00:11:33.347622 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:33 crc kubenswrapper[5119]: E0220 00:11:33.448208 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:33 crc kubenswrapper[5119]: E0220 00:11:33.549334 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:33 crc kubenswrapper[5119]: E0220 00:11:33.650008 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:33 crc kubenswrapper[5119]: E0220 00:11:33.750339 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:33 crc kubenswrapper[5119]: E0220 00:11:33.851399 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:33 crc kubenswrapper[5119]: E0220 00:11:33.951855 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:34 crc kubenswrapper[5119]: E0220 00:11:34.052715 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:34 crc kubenswrapper[5119]: E0220 00:11:34.153768 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:34 crc kubenswrapper[5119]: I0220 00:11:34.240140 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Feb 20 00:11:34 crc kubenswrapper[5119]: E0220 00:11:34.254617 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:34 crc kubenswrapper[5119]: E0220 00:11:34.355503 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:34 crc kubenswrapper[5119]: E0220 00:11:34.456290 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:34 crc kubenswrapper[5119]: E0220 00:11:34.557436 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:34 crc kubenswrapper[5119]: E0220 00:11:34.657951 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
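The entries above capture the failure loop this stretch of the log documents: the kubelet cannot find a Node object named "crc" in its informer cache, and the kube-apiserver-check-endpoints container keeps exiting (exit code 255), so its restarts are throttled by CrashLoopBackOff ("back-off 40s"). The 40s delay is consistent with the kubelet's default container restart back-off, assumed here to start at 10s, double after each consecutive failure, and cap at 5 minutes; only the 40s figure itself comes from this log. A minimal illustrative sketch of that progression (not part of the log):

package main

import (
	"fmt"
	"time"
)

// backoff returns the delay applied before the nth consecutive restart of a
// crashing container, assuming the kubelet's default policy: a 10s initial
// delay that doubles after every failure and is capped at 5 minutes. These
// defaults are an assumption; only the observed 40s value appears in the log.
func backoff(initial, cap time.Duration, failures int) time.Duration {
	d := initial
	for i := 1; i < failures; i++ {
		d *= 2
		if d > cap {
			return cap
		}
	}
	return d
}

func main() {
	for n := 1; n <= 6; n++ {
		fmt.Printf("failure %d -> back-off %s\n", n, backoff(10*time.Second, 5*time.Minute, n))
	}
}

Run as-is it prints 10s, 20s, 40s, 1m20s, 2m40s, 5m0s; the third consecutive crash lines up with the "back-off 40s" reported above.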
Feb 20 00:11:34 crc kubenswrapper[5119]: E0220 00:11:34.758102 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:34 crc kubenswrapper[5119]: E0220 00:11:34.859101 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:34 crc kubenswrapper[5119]: E0220 00:11:34.959776 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:35 crc kubenswrapper[5119]: E0220 00:11:35.060366 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:35 crc kubenswrapper[5119]: E0220 00:11:35.161396 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:35 crc kubenswrapper[5119]: E0220 00:11:35.262213 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:35 crc kubenswrapper[5119]: E0220 00:11:35.362431 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:35 crc kubenswrapper[5119]: E0220 00:11:35.462765 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:35 crc kubenswrapper[5119]: E0220 00:11:35.563834 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:35 crc kubenswrapper[5119]: E0220 00:11:35.664431 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:35 crc kubenswrapper[5119]: E0220 00:11:35.765255 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:35 crc kubenswrapper[5119]: E0220 00:11:35.866352 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:35 crc kubenswrapper[5119]: E0220 00:11:35.966831 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:36 crc kubenswrapper[5119]: E0220 00:11:36.067901 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:36 crc kubenswrapper[5119]: E0220 00:11:36.169095 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:36 crc kubenswrapper[5119]: E0220 00:11:36.269311 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:36 crc kubenswrapper[5119]: E0220 00:11:36.370159 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:36 crc kubenswrapper[5119]: E0220 00:11:36.471017 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:36 crc kubenswrapper[5119]: E0220 00:11:36.571176 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:36 crc kubenswrapper[5119]: E0220 00:11:36.672800 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:36 crc kubenswrapper[5119]: E0220 00:11:36.773667 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:11:36 crc
kubenswrapper[5119]: E0220 00:11:36.874337 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:36 crc kubenswrapper[5119]: E0220 00:11:36.975130 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:37 crc kubenswrapper[5119]: E0220 00:11:37.075712 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:37 crc kubenswrapper[5119]: E0220 00:11:37.176615 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:37 crc kubenswrapper[5119]: E0220 00:11:37.277205 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:37 crc kubenswrapper[5119]: E0220 00:11:37.378135 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:37 crc kubenswrapper[5119]: E0220 00:11:37.479086 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:37 crc kubenswrapper[5119]: E0220 00:11:37.579745 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:37 crc kubenswrapper[5119]: E0220 00:11:37.680368 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:37 crc kubenswrapper[5119]: E0220 00:11:37.682753 5119 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 20 00:11:37 crc kubenswrapper[5119]: I0220 00:11:37.687701 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:37 crc kubenswrapper[5119]: I0220 00:11:37.687761 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:37 crc kubenswrapper[5119]: I0220 00:11:37.687780 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:37 crc kubenswrapper[5119]: I0220 00:11:37.687807 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:37 crc kubenswrapper[5119]: I0220 00:11:37.687828 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:37Z","lastTransitionTime":"2026-02-20T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:11:37 crc kubenswrapper[5119]: E0220 00:11:37.701837 5119 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"301425df-f98b-4e7d-a726-c87ed89cc7b9\\\",\\\"systemUUID\\\":\\\"0b11b3ff-8b58-4601-b700-d0d714919b4e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:37 crc kubenswrapper[5119]: I0220 00:11:37.706581 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:37 crc kubenswrapper[5119]: I0220 00:11:37.706673 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:37 crc kubenswrapper[5119]: I0220 00:11:37.706699 5119 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:37 crc kubenswrapper[5119]: I0220 00:11:37.706733 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:37 crc kubenswrapper[5119]: I0220 00:11:37.706759 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:37Z","lastTransitionTime":"2026-02-20T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:37 crc kubenswrapper[5119]: E0220 00:11:37.722849 5119 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"301425df-f98b-4e7d-a726-c87ed89cc7b9\\\",\\\"systemUUID\\\":\\\"0b11b3ff-8b58-4601-b700-d0d714919b4e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:37 crc kubenswrapper[5119]: I0220 00:11:37.728161 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:37 crc kubenswrapper[5119]: I0220 00:11:37.728270 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:37 crc kubenswrapper[5119]: I0220 00:11:37.728291 5119 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:37 crc kubenswrapper[5119]: I0220 00:11:37.728320 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:37 crc kubenswrapper[5119]: I0220 00:11:37.728349 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:37Z","lastTransitionTime":"2026-02-20T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:37 crc kubenswrapper[5119]: E0220 00:11:37.744177 5119 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"301425df-f98b-4e7d-a726-c87ed89cc7b9\\\",\\\"systemUUID\\\":\\\"0b11b3ff-8b58-4601-b700-d0d714919b4e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:37 crc kubenswrapper[5119]: I0220 00:11:37.749108 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:37 crc kubenswrapper[5119]: I0220 00:11:37.749203 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:37 crc kubenswrapper[5119]: I0220 00:11:37.749230 5119 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:37 crc kubenswrapper[5119]: I0220 00:11:37.749268 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:37 crc kubenswrapper[5119]: I0220 00:11:37.749295 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:37Z","lastTransitionTime":"2026-02-20T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:37 crc kubenswrapper[5119]: E0220 00:11:37.764690 5119 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"301425df-f98b-4e7d-a726-c87ed89cc7b9\\\",\\\"systemUUID\\\":\\\"0b11b3ff-8b58-4601-b700-d0d714919b4e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:37 crc kubenswrapper[5119]: E0220 00:11:37.764946 5119 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 20 00:11:37 crc kubenswrapper[5119]: E0220 00:11:37.780769 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:37 crc kubenswrapper[5119]: E0220 00:11:37.881737 5119 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:37 crc kubenswrapper[5119]: E0220 00:11:37.982418 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:38 crc kubenswrapper[5119]: E0220 00:11:38.083255 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:38 crc kubenswrapper[5119]: E0220 00:11:38.183905 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:38 crc kubenswrapper[5119]: E0220 00:11:38.285080 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:38 crc kubenswrapper[5119]: E0220 00:11:38.385672 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:38 crc kubenswrapper[5119]: E0220 00:11:38.485790 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:38 crc kubenswrapper[5119]: E0220 00:11:38.586838 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:38 crc kubenswrapper[5119]: E0220 00:11:38.687367 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:38 crc kubenswrapper[5119]: E0220 00:11:38.788576 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:38 crc kubenswrapper[5119]: I0220 00:11:38.855934 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:11:38 crc kubenswrapper[5119]: I0220 00:11:38.857201 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:38 crc kubenswrapper[5119]: I0220 00:11:38.857482 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:38 crc kubenswrapper[5119]: I0220 00:11:38.857515 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:38 crc kubenswrapper[5119]: E0220 00:11:38.858162 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:11:38 crc kubenswrapper[5119]: E0220 00:11:38.889003 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:38 crc kubenswrapper[5119]: E0220 00:11:38.936324 5119 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 20 00:11:38 crc kubenswrapper[5119]: I0220 00:11:38.946062 5119 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:11:38 crc kubenswrapper[5119]: I0220 00:11:38.946647 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:11:38 crc kubenswrapper[5119]: I0220 00:11:38.947707 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:38 crc kubenswrapper[5119]: I0220 00:11:38.947773 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 20 00:11:38 crc kubenswrapper[5119]: I0220 00:11:38.947796 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:38 crc kubenswrapper[5119]: E0220 00:11:38.948962 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:11:38 crc kubenswrapper[5119]: I0220 00:11:38.949660 5119 scope.go:117] "RemoveContainer" containerID="f6194530545437982490600b50c8861471742ab842e14ab627a123778c428dcf" Feb 20 00:11:38 crc kubenswrapper[5119]: E0220 00:11:38.950206 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 20 00:11:38 crc kubenswrapper[5119]: E0220 00:11:38.989747 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:39 crc kubenswrapper[5119]: E0220 00:11:39.090325 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:39 crc kubenswrapper[5119]: E0220 00:11:39.190936 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:39 crc kubenswrapper[5119]: E0220 00:11:39.292007 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:39 crc kubenswrapper[5119]: E0220 00:11:39.392680 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:39 crc kubenswrapper[5119]: E0220 00:11:39.493068 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:39 crc kubenswrapper[5119]: E0220 00:11:39.594115 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:39 crc kubenswrapper[5119]: E0220 00:11:39.694320 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:39 crc kubenswrapper[5119]: E0220 00:11:39.794973 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:39 crc kubenswrapper[5119]: E0220 00:11:39.895581 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:39 crc kubenswrapper[5119]: E0220 00:11:39.996531 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:40 crc kubenswrapper[5119]: E0220 00:11:40.098035 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:40 crc kubenswrapper[5119]: E0220 00:11:40.198820 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:40 crc kubenswrapper[5119]: E0220 00:11:40.299328 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:40 crc kubenswrapper[5119]: E0220 00:11:40.400432 5119 kubelet_node_status.go:515] "Error getting the current 
node from lister" err="node \"crc\" not found" Feb 20 00:11:40 crc kubenswrapper[5119]: E0220 00:11:40.501027 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:40 crc kubenswrapper[5119]: E0220 00:11:40.601647 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:40 crc kubenswrapper[5119]: E0220 00:11:40.702886 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:40 crc kubenswrapper[5119]: E0220 00:11:40.802982 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:40 crc kubenswrapper[5119]: E0220 00:11:40.904177 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:41 crc kubenswrapper[5119]: E0220 00:11:41.005439 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:41 crc kubenswrapper[5119]: E0220 00:11:41.105825 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:41 crc kubenswrapper[5119]: E0220 00:11:41.206335 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.224992 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.225440 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.226751 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.226924 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.227095 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:41 crc kubenswrapper[5119]: E0220 00:11:41.228062 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.228780 5119 scope.go:117] "RemoveContainer" containerID="f6194530545437982490600b50c8861471742ab842e14ab627a123778c428dcf" Feb 20 00:11:41 crc kubenswrapper[5119]: E0220 00:11:41.229317 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 20 00:11:41 crc kubenswrapper[5119]: E0220 00:11:41.307317 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:41 crc kubenswrapper[5119]: E0220 00:11:41.408820 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.429229 5119 
reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.457280 5119 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.475825 5119 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.512315 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.512396 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.512423 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.512451 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.512476 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:41Z","lastTransitionTime":"2026-02-20T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.574523 5119 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.615585 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.615651 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.615662 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.615685 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.615698 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:41Z","lastTransitionTime":"2026-02-20T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.675922 5119 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.718120 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.718191 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.718213 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.718240 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.718259 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:41Z","lastTransitionTime":"2026-02-20T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.748993 5119 apiserver.go:52] "Watching apiserver" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.761047 5119 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.761630 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-2b6r7","openshift-kube-apiserver/kube-apiserver-crc","openshift-multus/multus-rlzxr","openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9qqjj","openshift-ovn-kubernetes/ovnkube-node-m42rs","openshift-dns/node-resolver-2zwfl","openshift-machine-config-operator/machine-config-daemon-l7jjp","openshift-network-operator/iptables-alerter-5jnd7","openshift-multus/network-metrics-daemon-vnzx8","openshift-network-node-identity/network-node-identity-dgvkt","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-multus/multus-additional-cni-plugins-m8j8p","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"] Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.763334 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.764292 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:11:41 crc kubenswrapper[5119]: E0220 00:11:41.764411 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.767196 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.767316 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 20 00:11:41 crc kubenswrapper[5119]: E0220 00:11:41.768055 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.768682 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.769737 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.770343 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:11:41 crc kubenswrapper[5119]: E0220 00:11:41.770423 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.770515 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.770845 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.772022 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.772274 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.772805 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.774317 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.774584 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.776512 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.776958 5119 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.777811 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.782081 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.782371 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.782988 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.783940 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.789371 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.790236 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-2zwfl" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.791853 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.792911 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.795279 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.796075 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.798982 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-2b6r7" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.802723 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.803392 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.803726 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.804255 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.812576 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.817771 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.819618 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.819882 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.821296 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.824026 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.824175 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.824132 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.827679 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.828123 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vnzx8" Feb 20 00:11:41 crc kubenswrapper[5119]: E0220 00:11:41.828206 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vnzx8" podUID="00a91a87-0ad1-4805-a686-42ea9dfa6bb9" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.828779 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.828839 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.828858 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.828902 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.828919 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:41Z","lastTransitionTime":"2026-02-20T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.832835 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9qqjj" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.834871 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.835787 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.836324 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-m8j8p" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.838592 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.839334 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.840113 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.840413 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.840590 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.840615 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.841119 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.843207 5119 scope.go:117] "RemoveContainer" containerID="f6194530545437982490600b50c8861471742ab842e14ab627a123778c428dcf" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.843353 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-rlzxr" Feb 20 00:11:41 crc kubenswrapper[5119]: E0220 00:11:41.843651 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.844185 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.845394 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.846799 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.858902 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.861467 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/34a35a47-a06d-4444-9141-580ed7777c52-tuning-conf-dir\") pod \"multus-additional-cni-plugins-m8j8p\" (UID: \"34a35a47-a06d-4444-9141-580ed7777c52\") " pod="openshift-multus/multus-additional-cni-plugins-m8j8p" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.862063 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-multus-socket-dir-parent\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.862197 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.862242 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-run-netns\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.862313 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-run-ovn-kubernetes\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.862381 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:41 crc 
kubenswrapper[5119]: I0220 00:11:41.862249 5119 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.862420 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-multus-cni-dir\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.862484 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-cnibin\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.862527 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.862618 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/34a35a47-a06d-4444-9141-580ed7777c52-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-m8j8p\" (UID: \"34a35a47-a06d-4444-9141-580ed7777c52\") " pod="openshift-multus/multus-additional-cni-plugins-m8j8p" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.862714 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e24ea4b0-1a34-4fb3-b40c-684c03795e07-multus-daemon-config\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.862857 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-host-run-multus-certs\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.863064 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:11:41 crc kubenswrapper[5119]: E0220 00:11:41.863146 5119 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 20 00:11:41 crc kubenswrapper[5119]: E0220 00:11:41.863267 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. 
No retries permitted until 2026-02-20 00:11:42.363229567 +0000 UTC m=+84.342193869 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.863266 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-os-release\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.863331 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-run-openvswitch\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.863371 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/34a35a47-a06d-4444-9141-580ed7777c52-os-release\") pod \"multus-additional-cni-plugins-m8j8p\" (UID: \"34a35a47-a06d-4444-9141-580ed7777c52\") " pod="openshift-multus/multus-additional-cni-plugins-m8j8p" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.863407 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/34a35a47-a06d-4444-9141-580ed7777c52-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-m8j8p\" (UID: \"34a35a47-a06d-4444-9141-580ed7777c52\") " pod="openshift-multus/multus-additional-cni-plugins-m8j8p" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.863442 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-hostroot\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.863478 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.863514 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.863595 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-host-run-k8s-cni-cncf-io\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.863632 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a9bc4f8d-447f-4dd2-a865-6fd066513b13-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-9qqjj\" (UID: \"a9bc4f8d-447f-4dd2-a865-6fd066513b13\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9qqjj" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.863663 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfpqm\" (UniqueName: \"kubernetes.io/projected/3b19fcd1-929b-4b51-8cd2-b0af16c7a5f7-kube-api-access-pfpqm\") pod \"node-resolver-2zwfl\" (UID: \"3b19fcd1-929b-4b51-8cd2-b0af16c7a5f7\") " pod="openshift-dns/node-resolver-2zwfl" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.863698 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-node-log\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.863739 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-cni-bin\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.863787 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-cni-netd\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.863814 5119 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.863840 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-host-run-netns\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.863887 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/02bd3ad7-f57a-48e9-86d9-ca9c36a7218d-proxy-tls\") pod \"machine-config-daemon-l7jjp\" (UID: \"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d\") " pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.863938 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/02bd3ad7-f57a-48e9-86d9-ca9c36a7218d-mcd-auth-proxy-config\") pod \"machine-config-daemon-l7jjp\" (UID: \"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d\") " pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.863995 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a9bc4f8d-447f-4dd2-a865-6fd066513b13-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-9qqjj\" (UID: \"a9bc4f8d-447f-4dd2-a865-6fd066513b13\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9qqjj" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.864041 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3b19fcd1-929b-4b51-8cd2-b0af16c7a5f7-tmp-dir\") pod \"node-resolver-2zwfl\" (UID: \"3b19fcd1-929b-4b51-8cd2-b0af16c7a5f7\") " pod="openshift-dns/node-resolver-2zwfl" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.864091 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.864141 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/34a35a47-a06d-4444-9141-580ed7777c52-cnibin\") pod \"multus-additional-cni-plugins-m8j8p\" (UID: \"34a35a47-a06d-4444-9141-580ed7777c52\") " pod="openshift-multus/multus-additional-cni-plugins-m8j8p" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.864195 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/34a35a47-a06d-4444-9141-580ed7777c52-cni-binary-copy\") pod \"multus-additional-cni-plugins-m8j8p\" (UID: \"34a35a47-a06d-4444-9141-580ed7777c52\") " pod="openshift-multus/multus-additional-cni-plugins-m8j8p" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.864245 5119 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wn84\" (UniqueName: \"kubernetes.io/projected/a9bc4f8d-447f-4dd2-a865-6fd066513b13-kube-api-access-4wn84\") pod \"ovnkube-control-plane-57b78d8988-9qqjj\" (UID: \"a9bc4f8d-447f-4dd2-a865-6fd066513b13\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9qqjj" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.864306 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3b19fcd1-929b-4b51-8cd2-b0af16c7a5f7-hosts-file\") pod \"node-resolver-2zwfl\" (UID: \"3b19fcd1-929b-4b51-8cd2-b0af16c7a5f7\") " pod="openshift-dns/node-resolver-2zwfl" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.864351 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-slash\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.864402 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-run-systemd\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.864446 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-host-var-lib-kubelet\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.864499 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-etc-kubernetes\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.864575 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c57hk\" (UniqueName: \"kubernetes.io/projected/02bd3ad7-f57a-48e9-86d9-ca9c36a7218d-kube-api-access-c57hk\") pod \"machine-config-daemon-l7jjp\" (UID: \"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d\") " pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.864654 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-kubelet\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.864710 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-ovnkube-config\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.864768 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.864934 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a9bc4f8d-447f-4dd2-a865-6fd066513b13-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-9qqjj\" (UID: \"a9bc4f8d-447f-4dd2-a865-6fd066513b13\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9qqjj" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.864998 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.865093 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.865140 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-systemd-units\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.865172 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-ovnkube-script-lib\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.865206 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-966tq\" (UniqueName: \"kubernetes.io/projected/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-kube-api-access-966tq\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.865266 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-system-cni-dir\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:41 crc kubenswrapper[5119]: E0220 00:11:41.865208 5119 secret.go:189] Couldn't get secret 
openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.865332 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-run-ovn\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.865363 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-env-overrides\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.865395 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-host-var-lib-cni-bin\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.865432 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4lc9\" (UniqueName: \"kubernetes.io/projected/e24ea4b0-1a34-4fb3-b40c-684c03795e07-kube-api-access-v4lc9\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:41 crc kubenswrapper[5119]: E0220 00:11:41.865875 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:42.365840248 +0000 UTC m=+84.344804570 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.865897 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.866127 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-var-lib-openvswitch\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.866191 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-etc-openvswitch\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.866295 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-ovn-node-metrics-cert\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.866397 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/34a35a47-a06d-4444-9141-580ed7777c52-system-cni-dir\") pod \"multus-additional-cni-plugins-m8j8p\" (UID: \"34a35a47-a06d-4444-9141-580ed7777c52\") " pod="openshift-multus/multus-additional-cni-plugins-m8j8p" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.866474 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj84b\" (UniqueName: \"kubernetes.io/projected/34a35a47-a06d-4444-9141-580ed7777c52-kube-api-access-jj84b\") pod \"multus-additional-cni-plugins-m8j8p\" (UID: \"34a35a47-a06d-4444-9141-580ed7777c52\") " pod="openshift-multus/multus-additional-cni-plugins-m8j8p" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.866591 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e24ea4b0-1a34-4fb3-b40c-684c03795e07-cni-binary-copy\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.866633 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-log-socket\") pod \"ovnkube-node-m42rs\" 
(UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.866709 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-host-var-lib-cni-multus\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.866745 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-multus-conf-dir\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.866837 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/02bd3ad7-f57a-48e9-86d9-ca9c36a7218d-rootfs\") pod \"machine-config-daemon-l7jjp\" (UID: \"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d\") " pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.868620 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.868969 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.874245 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.874409 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.876353 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.882211 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.883024 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 20 00:11:41 crc kubenswrapper[5119]: E0220 00:11:41.892955 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 20 00:11:41 crc kubenswrapper[5119]: E0220 00:11:41.893006 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 00:11:41 crc kubenswrapper[5119]: E0220 00:11:41.893029 5119 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:11:41 crc 
kubenswrapper[5119]: E0220 00:11:41.893421 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:42.393318826 +0000 UTC m=+84.372283148 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:11:41 crc kubenswrapper[5119]: E0220 00:11:41.897822 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 20 00:11:41 crc kubenswrapper[5119]: E0220 00:11:41.897861 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 00:11:41 crc kubenswrapper[5119]: E0220 00:11:41.897875 5119 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:11:41 crc kubenswrapper[5119]: E0220 00:11:41.897956 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:42.39793287 +0000 UTC m=+84.376897162 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.899106 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.915523 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.924861 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c57hk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c57hk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:11:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7jjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.931240 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.931305 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.931330 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.931359 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.931379 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:41Z","lastTransitionTime":"2026-02-20T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.934865 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-2zwfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b19fcd1-929b-4b51-8cd2-b0af16c7a5f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:11:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2zwfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.942712 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2b6r7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ebd4256-9121-496b-856b-910c268419c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zj2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:11:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2b6r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.961720 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-966tq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-966tq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-966tq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-966tq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-966tq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-966tq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-966tq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-966tq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-966tq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:11:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m42rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.968359 5119 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.968437 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.968480 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.968528 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.968600 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.968650 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.968760 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.969514 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.969611 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.969616 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.969830 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.969912 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.969963 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.969973 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.970102 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.970148 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.970182 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.970210 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.970238 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.970264 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.970292 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.970322 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.970349 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.970378 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.970260 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.970399 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.970774 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.970797 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.970515 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.971033 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.971082 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.971123 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.971159 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.971197 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.971236 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.971273 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.971317 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.971303 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod 
"a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.971356 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.971391 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.971425 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.971460 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.971493 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.969533 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.971527 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.971634 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.971691 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.971705 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.971727 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.971739 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.971757 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.971788 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.971779 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.971818 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.971931 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.971993 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.972055 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.972124 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.972179 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.972238 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.972417 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.972473 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.972523 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.972479 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.972659 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.972713 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.972762 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.972812 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.972863 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.972911 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.973365 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vnzx8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00a91a87-0ad1-4805-a686-42ea9dfa6bb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc7zs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fc7zs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:11:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vnzx8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.973510 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.973603 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: 
\"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.973656 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.973703 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.973894 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.973963 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.974016 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.974073 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.974439 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.974513 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.974602 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.974990 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.975066 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.975106 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.975142 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.975192 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.975231 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.975271 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.975309 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.975351 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.975396 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.975434 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.975472 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.975509 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.975588 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.975631 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.975683 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.975720 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.975755 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.975793 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.975832 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.975868 5119 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.975918 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.975957 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.975996 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.976039 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.976074 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.976110 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.976156 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.976203 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.976249 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.976304 5119 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.976365 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.976434 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.976496 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.976582 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.976648 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.976706 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.976777 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.976843 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.976887 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.976924 5119 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.976972 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.972928 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.973226 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.973360 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.973379 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.973601 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.973649 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.973691 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.974146 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.977287 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.974223 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.974268 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.974295 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.974699 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.974746 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). 
InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.975940 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.976045 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.976340 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.976667 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.978102 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.978231 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.978245 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.978718 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.978746 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.978582 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.978777 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.979182 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.979252 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.979305 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.979430 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.979482 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.979663 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.979719 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.979761 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.979814 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.979831 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.979871 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.979912 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.979955 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.979994 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.980035 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.980079 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.980129 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.980168 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.980223 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.980283 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") 
pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.980343 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.980398 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.980440 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.980477 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.980514 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.980598 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.980663 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.980721 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.980790 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.980855 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod 
\"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.980909 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.980972 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.981077 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.981128 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.981167 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.983737 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.983786 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.983835 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.983876 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.983921 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.983966 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.984011 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.984289 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.984340 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.985269 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.985353 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.985307 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.985359 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.985327 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.985875 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.985958 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.986020 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.986078 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.986128 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.986171 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.986175 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.986381 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.986996 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.987190 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.987330 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.987405 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.987429 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.987461 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.987505 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.987595 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.987574 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.987684 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.987717 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.987895 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.987934 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.987895 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.988380 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.988812 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.988880 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.988902 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.988930 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.988973 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.989012 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.989156 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.989462 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.989499 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). 
InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.989567 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.989619 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.989671 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.989705 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.989737 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.989767 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.989800 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.989827 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.989855 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.991416 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-rlzxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24ea4b0-1a34-4fb3-b40c-684c03795e07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4lc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:11:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rlzxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.991844 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.991887 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.992359 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.992455 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.992506 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.992850 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.994189 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.989702 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.990084 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.990271 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.991754 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.991518 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.991832 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.992241 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.992426 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.992509 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.992520 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.992820 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.992858 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.992911 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.993034 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.993343 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.993921 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.994509 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.994036 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.994585 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.994632 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.994669 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.994713 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.994730 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.994755 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.994797 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.994835 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.994894 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.994974 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.995024 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.995022 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.995067 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.995102 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.995147 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.995195 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.995241 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.995285 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.995327 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.995369 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.995409 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.995450 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.995492 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.995530 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.995643 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.995685 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.995728 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.995789 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.995841 5119 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.996203 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.996245 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.998306 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.998374 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.998421 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.999433 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.999493 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.999537 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.999621 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.999665 5119 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.999713 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.999766 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.999817 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Feb 20 00:11:41 crc kubenswrapper[5119]: I0220 00:11:41.999860 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:41.999973 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-cni-bin\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.000027 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-cni-netd\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.000066 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-host-run-netns\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.000105 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/02bd3ad7-f57a-48e9-86d9-ca9c36a7218d-proxy-tls\") pod \"machine-config-daemon-l7jjp\" (UID: \"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d\") " pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.000147 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/02bd3ad7-f57a-48e9-86d9-ca9c36a7218d-mcd-auth-proxy-config\") pod \"machine-config-daemon-l7jjp\" (UID: 
\"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d\") " pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.000196 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a9bc4f8d-447f-4dd2-a865-6fd066513b13-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-9qqjj\" (UID: \"a9bc4f8d-447f-4dd2-a865-6fd066513b13\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9qqjj" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.000247 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3b19fcd1-929b-4b51-8cd2-b0af16c7a5f7-tmp-dir\") pod \"node-resolver-2zwfl\" (UID: \"3b19fcd1-929b-4b51-8cd2-b0af16c7a5f7\") " pod="openshift-dns/node-resolver-2zwfl" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.000337 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/34a35a47-a06d-4444-9141-580ed7777c52-cnibin\") pod \"multus-additional-cni-plugins-m8j8p\" (UID: \"34a35a47-a06d-4444-9141-580ed7777c52\") " pod="openshift-multus/multus-additional-cni-plugins-m8j8p" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.000407 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/34a35a47-a06d-4444-9141-580ed7777c52-cni-binary-copy\") pod \"multus-additional-cni-plugins-m8j8p\" (UID: \"34a35a47-a06d-4444-9141-580ed7777c52\") " pod="openshift-multus/multus-additional-cni-plugins-m8j8p" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.000484 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4wn84\" (UniqueName: \"kubernetes.io/projected/a9bc4f8d-447f-4dd2-a865-6fd066513b13-kube-api-access-4wn84\") pod \"ovnkube-control-plane-57b78d8988-9qqjj\" (UID: \"a9bc4f8d-447f-4dd2-a865-6fd066513b13\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9qqjj" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.000533 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.000614 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3b19fcd1-929b-4b51-8cd2-b0af16c7a5f7-hosts-file\") pod \"node-resolver-2zwfl\" (UID: \"3b19fcd1-929b-4b51-8cd2-b0af16c7a5f7\") " pod="openshift-dns/node-resolver-2zwfl" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.000831 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3b19fcd1-929b-4b51-8cd2-b0af16c7a5f7-hosts-file\") pod \"node-resolver-2zwfl\" (UID: \"3b19fcd1-929b-4b51-8cd2-b0af16c7a5f7\") " pod="openshift-dns/node-resolver-2zwfl" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.000909 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-cni-bin\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.000962 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-cni-netd\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.001011 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-host-run-netns\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.001381 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-slash\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.001475 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-run-systemd\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.001533 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-host-var-lib-kubelet\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.001616 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-etc-kubernetes\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.001702 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c57hk\" (UniqueName: \"kubernetes.io/projected/02bd3ad7-f57a-48e9-86d9-ca9c36a7218d-kube-api-access-c57hk\") pod \"machine-config-daemon-l7jjp\" (UID: \"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d\") " pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.001787 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.001882 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zj2x\" (UniqueName: 
\"kubernetes.io/projected/7ebd4256-9121-496b-856b-910c268419c6-kube-api-access-2zj2x\") pod \"node-ca-2b6r7\" (UID: \"7ebd4256-9121-496b-856b-910c268419c6\") " pod="openshift-image-registry/node-ca-2b6r7" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.001959 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00a91a87-0ad1-4805-a686-42ea9dfa6bb9-metrics-certs\") pod \"network-metrics-daemon-vnzx8\" (UID: \"00a91a87-0ad1-4805-a686-42ea9dfa6bb9\") " pod="openshift-multus/network-metrics-daemon-vnzx8" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.002038 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-kubelet\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.002112 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-ovnkube-config\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.002225 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a9bc4f8d-447f-4dd2-a865-6fd066513b13-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-9qqjj\" (UID: \"a9bc4f8d-447f-4dd2-a865-6fd066513b13\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9qqjj" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.002889 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.002987 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-systemd-units\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.004103 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-ovnkube-script-lib\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.004217 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.004259 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/a9bc4f8d-447f-4dd2-a865-6fd066513b13-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-9qqjj\" (UID: \"a9bc4f8d-447f-4dd2-a865-6fd066513b13\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9qqjj" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.004596 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-systemd-units\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.004906 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-slash\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.005134 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-run-systemd\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.005434 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-host-var-lib-kubelet\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.005446 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3b19fcd1-929b-4b51-8cd2-b0af16c7a5f7-tmp-dir\") pod \"node-resolver-2zwfl\" (UID: \"3b19fcd1-929b-4b51-8cd2-b0af16c7a5f7\") " pod="openshift-dns/node-resolver-2zwfl" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:41.995726 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:41.995941 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:41.996218 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:41.996310 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:41.996317 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:41.996686 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:41.997238 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:41.997414 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:41.997500 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:41.997870 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:41.998568 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). 
InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:41.998774 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:41.998982 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:41.998932 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:41.999012 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:41.999048 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:41.999051 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:41.999144 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: E0220 00:11:42.004099 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-20 00:11:42.504056953 +0000 UTC m=+84.483021265 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.006322 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-966tq\" (UniqueName: \"kubernetes.io/projected/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-kube-api-access-966tq\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.006373 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-system-cni-dir\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.006483 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-run-ovn\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.006534 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-env-overrides\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.006590 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-host-var-lib-cni-bin\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.007464 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/02bd3ad7-f57a-48e9-86d9-ca9c36a7218d-mcd-auth-proxy-config\") pod \"machine-config-daemon-l7jjp\" (UID: \"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d\") " pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.007695 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.007782 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.007819 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.007874 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-ovnkube-config\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.008041 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.008211 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v4lc9\" (UniqueName: \"kubernetes.io/projected/e24ea4b0-1a34-4fb3-b40c-684c03795e07-kube-api-access-v4lc9\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.008264 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.008361 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.008451 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-kubelet\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.008530 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-system-cni-dir\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.008609 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/34a35a47-a06d-4444-9141-580ed7777c52-cnibin\") pod \"multus-additional-cni-plugins-m8j8p\" (UID: \"34a35a47-a06d-4444-9141-580ed7777c52\") " pod="openshift-multus/multus-additional-cni-plugins-m8j8p" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.008672 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-var-lib-openvswitch\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.008678 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.008716 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-etc-openvswitch\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.008757 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-ovn-node-metrics-cert\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.008755 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). 
InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.009101 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.009295 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.009360 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.009402 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.009464 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-run-ovn\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.010047 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/34a35a47-a06d-4444-9141-580ed7777c52-system-cni-dir\") pod \"multus-additional-cni-plugins-m8j8p\" (UID: \"34a35a47-a06d-4444-9141-580ed7777c52\") " pod="openshift-multus/multus-additional-cni-plugins-m8j8p" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.010156 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-host-var-lib-cni-bin\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.010330 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-etc-openvswitch\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.010487 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jj84b\" (UniqueName: \"kubernetes.io/projected/34a35a47-a06d-4444-9141-580ed7777c52-kube-api-access-jj84b\") pod \"multus-additional-cni-plugins-m8j8p\" (UID: \"34a35a47-a06d-4444-9141-580ed7777c52\") " pod="openshift-multus/multus-additional-cni-plugins-m8j8p" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.010570 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e24ea4b0-1a34-4fb3-b40c-684c03795e07-cni-binary-copy\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.010795 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.010946 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-log-socket\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.011014 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-env-overrides\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.011056 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-host-var-lib-cni-multus\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.011106 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-multus-conf-dir\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.011152 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-var-lib-openvswitch\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.011230 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-host-var-lib-cni-multus\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.011265 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/34a35a47-a06d-4444-9141-580ed7777c52-system-cni-dir\") pod \"multus-additional-cni-plugins-m8j8p\" (UID: \"34a35a47-a06d-4444-9141-580ed7777c52\") " pod="openshift-multus/multus-additional-cni-plugins-m8j8p" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.011461 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/02bd3ad7-f57a-48e9-86d9-ca9c36a7218d-rootfs\") pod \"machine-config-daemon-l7jjp\" (UID: \"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d\") " pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.011651 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc7zs\" (UniqueName: \"kubernetes.io/projected/00a91a87-0ad1-4805-a686-42ea9dfa6bb9-kube-api-access-fc7zs\") pod \"network-metrics-daemon-vnzx8\" (UID: \"00a91a87-0ad1-4805-a686-42ea9dfa6bb9\") " pod="openshift-multus/network-metrics-daemon-vnzx8" Feb 20 
00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.011749 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/02bd3ad7-f57a-48e9-86d9-ca9c36a7218d-rootfs\") pod \"machine-config-daemon-l7jjp\" (UID: \"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d\") " pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.011855 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/34a35a47-a06d-4444-9141-580ed7777c52-tuning-conf-dir\") pod \"multus-additional-cni-plugins-m8j8p\" (UID: \"34a35a47-a06d-4444-9141-580ed7777c52\") " pod="openshift-multus/multus-additional-cni-plugins-m8j8p" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.012270 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e24ea4b0-1a34-4fb3-b40c-684c03795e07-cni-binary-copy\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.012343 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-multus-socket-dir-parent\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.012403 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-log-socket\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.012416 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.012481 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-multus-socket-dir-parent\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.012517 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.005616 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-etc-kubernetes\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.012759 5119 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-run-netns\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.012940 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-run-ovn-kubernetes\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.012960 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-multus-conf-dir\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.012973 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-run-netns\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.013005 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.013229 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a9bc4f8d-447f-4dd2-a865-6fd066513b13-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-9qqjj\" (UID: \"a9bc4f8d-447f-4dd2-a865-6fd066513b13\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9qqjj" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.013439 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-multus-cni-dir\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.013644 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-run-ovn-kubernetes\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.010210 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a71b238f-114f-4e68-a564-732cd30a1fee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://c117f2e55fd27f6545f61b9e5a9b4d4082c31c133a91abee8c99eec651ffd5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0750a3e5db474c253e5acbbb6cd8b55f6a546a4d0dcb519fe8522e83acadaf5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocate
dResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f5b8c1c18a290a66db28dd2beef30b9fad2e2a11aaa57fa0cf9af61e3569c9cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f6194530545437982490600b50c8861471742ab842e14ab627a123778c428dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6194530545437982490600b50c8861471742ab842e14ab627a123778c428dcf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-20T00:11:33Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0220 00:11:31.957633 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0220 00:11:31.957767 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0220 00:11:31.958753 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2799090538/tls.crt::/tmp/serving-cert-2799090538/tls.key\\\\\\\"\\\\nI0220 00:11:33.049572 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0220 00:11:33.051555 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0220 00:11:33.051609 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0220 00:11:33.051659 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0220 00:11:33.051683 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0220 00:11:33.055751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0220 00:11:33.055801 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0220 00:11:33.055810 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0220 00:11:33.055818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0220 00:11:33.055825 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0220 00:11:33.055830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0220 00:11:33.055836 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0220 00:11:33.055865 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0220 00:11:33.058673 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-20T00:11:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c46d0135c3a754013f0e3f6bd50f94d244e87fcb264ea59ecac073594c52c17b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://359d6c31c66888047499ad5422caffca5098a97e7e28b76b359cfec7390c5605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359d6c31c66888047499ad5422caffca5098a97e7e28b76b359cfec7390c5605\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-20T00:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-20T00:10:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:10:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.014046 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/34a35a47-a06d-4444-9141-580ed7777c52-tuning-conf-dir\") pod \"multus-additional-cni-plugins-m8j8p\" (UID: \"34a35a47-a06d-4444-9141-580ed7777c52\") " pod="openshift-multus/multus-additional-cni-plugins-m8j8p" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.014130 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-cnibin\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.014196 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.014284 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-cnibin\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.014436 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-multus-cni-dir\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.014489 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/34a35a47-a06d-4444-9141-580ed7777c52-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-m8j8p\" (UID: \"34a35a47-a06d-4444-9141-580ed7777c52\") " pod="openshift-multus/multus-additional-cni-plugins-m8j8p" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.014579 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e24ea4b0-1a34-4fb3-b40c-684c03795e07-multus-daemon-config\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.014644 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-host-run-multus-certs\") pod \"multus-rlzxr\" (UID: 
\"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.014741 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-os-release\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.014793 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ebd4256-9121-496b-856b-910c268419c6-host\") pod \"node-ca-2b6r7\" (UID: \"7ebd4256-9121-496b-856b-910c268419c6\") " pod="openshift-image-registry/node-ca-2b6r7" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.014820 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-host-run-multus-certs\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.014844 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-run-openvswitch\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.014891 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/34a35a47-a06d-4444-9141-580ed7777c52-os-release\") pod \"multus-additional-cni-plugins-m8j8p\" (UID: \"34a35a47-a06d-4444-9141-580ed7777c52\") " pod="openshift-multus/multus-additional-cni-plugins-m8j8p" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.014941 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/34a35a47-a06d-4444-9141-580ed7777c52-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-m8j8p\" (UID: \"34a35a47-a06d-4444-9141-580ed7777c52\") " pod="openshift-multus/multus-additional-cni-plugins-m8j8p" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.014990 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-os-release\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.015017 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-hostroot\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.015031 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-run-openvswitch\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.015121 5119 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-hostroot\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.015219 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-host-run-k8s-cni-cncf-io\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.015267 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a9bc4f8d-447f-4dd2-a865-6fd066513b13-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-9qqjj\" (UID: \"a9bc4f8d-447f-4dd2-a865-6fd066513b13\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9qqjj" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.015307 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/34a35a47-a06d-4444-9141-580ed7777c52-os-release\") pod \"multus-additional-cni-plugins-m8j8p\" (UID: \"34a35a47-a06d-4444-9141-580ed7777c52\") " pod="openshift-multus/multus-additional-cni-plugins-m8j8p" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.015328 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7ebd4256-9121-496b-856b-910c268419c6-serviceca\") pod \"node-ca-2b6r7\" (UID: \"7ebd4256-9121-496b-856b-910c268419c6\") " pod="openshift-image-registry/node-ca-2b6r7" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.015392 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e24ea4b0-1a34-4fb3-b40c-684c03795e07-host-run-k8s-cni-cncf-io\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.015400 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pfpqm\" (UniqueName: \"kubernetes.io/projected/3b19fcd1-929b-4b51-8cd2-b0af16c7a5f7-kube-api-access-pfpqm\") pod \"node-resolver-2zwfl\" (UID: \"3b19fcd1-929b-4b51-8cd2-b0af16c7a5f7\") " pod="openshift-dns/node-resolver-2zwfl" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.015442 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-ovnkube-script-lib\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.015457 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e24ea4b0-1a34-4fb3-b40c-684c03795e07-multus-daemon-config\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.015510 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-node-log\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.015452 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-node-log\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.016309 5119 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.016354 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.016355 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a9bc4f8d-447f-4dd2-a865-6fd066513b13-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-9qqjj\" (UID: \"a9bc4f8d-447f-4dd2-a865-6fd066513b13\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9qqjj" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.016401 5119 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.016470 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.016501 5119 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.016524 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.016580 5119 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.016612 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.016635 5119 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.016660 5119 
reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.016684 5119 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.016745 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.016768 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.016794 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.016829 5119 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.016851 5119 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.016876 5119 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.016899 5119 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.016948 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.016971 5119 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.016993 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017014 5119 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017066 5119 
reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017087 5119 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017106 5119 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017128 5119 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017157 5119 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017178 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017200 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017231 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017254 5119 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017275 5119 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017297 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017327 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017348 5119 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017369 5119 reconciler_common.go:299] "Volume detached 
for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017391 5119 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017419 5119 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017441 5119 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017464 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017493 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017516 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017537 5119 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017579 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017618 5119 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017641 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017662 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017684 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017713 5119 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017743 5119 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017764 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017793 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017821 5119 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017842 5119 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017863 5119 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017894 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017917 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017938 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017958 5119 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.017985 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.018010 5119 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.018030 5119 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.018051 5119 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.018079 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.018100 5119 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.018122 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.019845 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.019888 5119 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.019911 5119 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.019933 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.019963 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.019986 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020012 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020037 5119 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020078 5119 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020100 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020123 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020145 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020173 5119 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020194 5119 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020215 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020243 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020267 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020288 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020320 5119 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020347 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020369 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020391 5119 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020413 5119 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020442 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020464 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020487 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020522 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020569 5119 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020597 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020632 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020662 5119 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020688 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020712 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020737 5119 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020765 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020787 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020809 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020831 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.020858 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.021106 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/02bd3ad7-f57a-48e9-86d9-ca9c36a7218d-proxy-tls\") pod \"machine-config-daemon-l7jjp\" (UID: \"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d\") " pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.021155 5119 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.021238 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.021284 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.021317 5119 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.021338 5119 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.021359 5119 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.021389 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.021412 5119 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.022077 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/34a35a47-a06d-4444-9141-580ed7777c52-cni-binary-copy\") pod \"multus-additional-cni-plugins-m8j8p\" (UID: \"34a35a47-a06d-4444-9141-580ed7777c52\") " pod="openshift-multus/multus-additional-cni-plugins-m8j8p" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.023879 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.023920 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.028278 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/34a35a47-a06d-4444-9141-580ed7777c52-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-m8j8p\" (UID: \"34a35a47-a06d-4444-9141-580ed7777c52\") " pod="openshift-multus/multus-additional-cni-plugins-m8j8p" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.029487 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4lc9\" (UniqueName: \"kubernetes.io/projected/e24ea4b0-1a34-4fb3-b40c-684c03795e07-kube-api-access-v4lc9\") pod \"multus-rlzxr\" (UID: \"e24ea4b0-1a34-4fb3-b40c-684c03795e07\") " pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.030132 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/34a35a47-a06d-4444-9141-580ed7777c52-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-m8j8p\" (UID: \"34a35a47-a06d-4444-9141-580ed7777c52\") " pod="openshift-multus/multus-additional-cni-plugins-m8j8p" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.033903 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.033956 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.033985 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.034013 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.034033 5119 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:42Z","lastTransitionTime":"2026-02-20T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.036057 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wn84\" (UniqueName: \"kubernetes.io/projected/a9bc4f8d-447f-4dd2-a865-6fd066513b13-kube-api-access-4wn84\") pod \"ovnkube-control-plane-57b78d8988-9qqjj\" (UID: \"a9bc4f8d-447f-4dd2-a865-6fd066513b13\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9qqjj" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.038285 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c57hk\" (UniqueName: \"kubernetes.io/projected/02bd3ad7-f57a-48e9-86d9-ca9c36a7218d-kube-api-access-c57hk\") pod \"machine-config-daemon-l7jjp\" (UID: \"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d\") " pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.039307 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.040756 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.040900 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.041187 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.041201 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.041953 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.042194 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.042296 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.042387 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.042377 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.042466 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.042457 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.042782 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.042829 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.043282 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.043345 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.043494 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-966tq\" (UniqueName: \"kubernetes.io/projected/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-kube-api-access-966tq\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.043703 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.043803 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.044011 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.044185 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.044233 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.044289 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.044582 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.044603 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.044805 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.044723 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.045106 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.045027 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.045291 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-ovn-node-metrics-cert\") pod \"ovnkube-node-m42rs\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.046101 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.046127 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.046097 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.046214 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.046259 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.046443 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.047135 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.047236 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.047583 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.046372 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.047164 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.047677 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.050153 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.050158 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.050172 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.049773 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.050269 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.050671 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.048416 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26037562-e88d-495d-af20-5448eb2b5b97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://21a886847fb1015eec09cff82cb000eef0affc5c81aea9cb3247b939a42c96f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:10:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ac6691d283aafa26b76bde45116abaa5c977877e0a499e7f032c8ecbb241d0cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:10:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3a6b81a15ece4cea96f108cd94199159531a44f069002237b6cb4f7239716d4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fdc287b96e8bcf9191f28a271ee71230df2b1cbc9085c1f59a2bbaf52da9fe89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-02-20T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:10:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.051266 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj84b\" (UniqueName: \"kubernetes.io/projected/34a35a47-a06d-4444-9141-580ed7777c52-kube-api-access-jj84b\") pod \"multus-additional-cni-plugins-m8j8p\" (UID: \"34a35a47-a06d-4444-9141-580ed7777c52\") " pod="openshift-multus/multus-additional-cni-plugins-m8j8p" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.051116 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.050952 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.050833 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfpqm\" (UniqueName: \"kubernetes.io/projected/3b19fcd1-929b-4b51-8cd2-b0af16c7a5f7-kube-api-access-pfpqm\") pod \"node-resolver-2zwfl\" (UID: \"3b19fcd1-929b-4b51-8cd2-b0af16c7a5f7\") " pod="openshift-dns/node-resolver-2zwfl" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.051576 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.051858 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.052202 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.052610 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.052665 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.052907 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.052975 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.053422 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.057088 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.057107 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.057277 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.057508 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.058070 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.060375 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.060571 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.061262 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.061272 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.061408 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.061478 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.062081 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.062448 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.062492 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.062899 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.065428 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.065806 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.065962 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.066038 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.066184 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.066574 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.066726 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.066849 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.067048 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.067243 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.067394 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.067519 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.067957 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.067954 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.067958 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.068165 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.068272 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.067791 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.069322 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.069427 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.069518 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.069532 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.069590 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.069918 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.070418 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.070534 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.070675 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.070969 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.071048 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.071087 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.071615 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.072941 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.073255 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.073270 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.073749 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.073787 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). 
InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.079784 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c57hk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c57hk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:11:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7jjp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.089771 5119 
status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-2zwfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b19fcd1-929b-4b51-8cd2-b0af16c7a5f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfpqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:11:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2zwfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.092001 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.101285 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9qqjj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9bc4f8d-447f-4dd2-a865-6fd066513b13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wn84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wn84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:11:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-9qqjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.105065 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.112048 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.113286 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.115140 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.115948 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m8j8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a35a47-a06d-4444-9141-580ed7777c52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj84b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj84b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj84b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-jj84b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj84b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj84b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jj84b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:11:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m8j8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.122970 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 
00:11:42.123036 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ebd4256-9121-496b-856b-910c268419c6-host\") pod \"node-ca-2b6r7\" (UID: \"7ebd4256-9121-496b-856b-910c268419c6\") " pod="openshift-image-registry/node-ca-2b6r7" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123079 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7ebd4256-9121-496b-856b-910c268419c6-serviceca\") pod \"node-ca-2b6r7\" (UID: \"7ebd4256-9121-496b-856b-910c268419c6\") " pod="openshift-image-registry/node-ca-2b6r7" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123116 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123212 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ebd4256-9121-496b-856b-910c268419c6-host\") pod \"node-ca-2b6r7\" (UID: \"7ebd4256-9121-496b-856b-910c268419c6\") " pod="openshift-image-registry/node-ca-2b6r7" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123275 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2zj2x\" (UniqueName: \"kubernetes.io/projected/7ebd4256-9121-496b-856b-910c268419c6-kube-api-access-2zj2x\") pod \"node-ca-2b6r7\" (UID: \"7ebd4256-9121-496b-856b-910c268419c6\") " pod="openshift-image-registry/node-ca-2b6r7" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123333 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00a91a87-0ad1-4805-a686-42ea9dfa6bb9-metrics-certs\") pod \"network-metrics-daemon-vnzx8\" (UID: \"00a91a87-0ad1-4805-a686-42ea9dfa6bb9\") " pod="openshift-multus/network-metrics-daemon-vnzx8" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123458 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fc7zs\" (UniqueName: \"kubernetes.io/projected/00a91a87-0ad1-4805-a686-42ea9dfa6bb9-kube-api-access-fc7zs\") pod \"network-metrics-daemon-vnzx8\" (UID: \"00a91a87-0ad1-4805-a686-42ea9dfa6bb9\") " pod="openshift-multus/network-metrics-daemon-vnzx8" Feb 20 00:11:42 crc kubenswrapper[5119]: E0220 00:11:42.123494 5119 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 00:11:42 crc kubenswrapper[5119]: E0220 00:11:42.123582 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00a91a87-0ad1-4805-a686-42ea9dfa6bb9-metrics-certs podName:00a91a87-0ad1-4805-a686-42ea9dfa6bb9 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:42.623562334 +0000 UTC m=+84.602526626 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/00a91a87-0ad1-4805-a686-42ea9dfa6bb9-metrics-certs") pod "network-metrics-daemon-vnzx8" (UID: "00a91a87-0ad1-4805-a686-42ea9dfa6bb9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123612 5119 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123636 5119 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123653 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123670 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123694 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123708 5119 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123721 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123737 5119 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123750 5119 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123765 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123778 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123795 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 
00:11:42.123806 5119 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123818 5119 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123829 5119 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123841 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123852 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123864 5119 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123875 5119 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123886 5119 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123898 5119 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123910 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123924 5119 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123938 5119 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123951 5119 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123964 5119 
reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123976 5119 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123988 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.123999 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124015 5119 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124029 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124040 5119 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124053 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124064 5119 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124075 5119 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124087 5119 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124098 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124110 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124122 5119 
reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124133 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124144 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124156 5119 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124170 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124181 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124193 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124204 5119 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124216 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124227 5119 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124237 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124249 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124264 5119 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124277 5119 reconciler_common.go:299] "Volume detached for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124289 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124301 5119 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124315 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124327 5119 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124339 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124353 5119 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124364 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124375 5119 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124389 5119 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124400 5119 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124411 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124424 5119 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124435 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124446 5119 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124457 5119 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124469 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124482 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124496 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124509 5119 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124519 5119 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124531 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124575 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124589 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124582 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7ebd4256-9121-496b-856b-910c268419c6-serviceca\") pod \"node-ca-2b6r7\" (UID: \"7ebd4256-9121-496b-856b-910c268419c6\") " pod="openshift-image-registry/node-ca-2b6r7" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124601 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124691 5119 
reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124718 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124744 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124768 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124797 5119 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124821 5119 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124846 5119 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124871 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124898 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124923 5119 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124948 5119 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.124975 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.125001 5119 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.125069 5119 reconciler_common.go:299] "Volume detached for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.125096 5119 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.125122 5119 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.125145 5119 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.125169 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.125201 5119 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.125226 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.125250 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.125281 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.125309 5119 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.125335 5119 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.125363 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.125388 5119 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.125413 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: 
\"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.125439 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.125471 5119 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.125501 5119 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.125530 5119 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.125593 5119 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.125618 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.129235 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c2913e9-8431-420a-ac4a-a2294390579d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c382bde8ab381f15e8e6ca6875bd303dfc59d8c2c14de3de4a0e317a888901d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b492ac1d6abbc4a4afef8b9614505249ae7efcd38c8fcd48a0c97f1e392c2e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bada530ab3542294459927d7a185cd77313350302b7e29d8901ce07eeea3a472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://07c89f665f0945400baa411dbc321ad8bb30b8661b4f89381364b6cea6c3c817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07c89f665f0945400baa411dbc321ad8bb30b8661b4f89381364b6cea6c3c817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-20T00:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-20T00:10:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:10:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.136321 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.140900 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.140935 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.140945 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.140961 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.140972 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:42Z","lastTransitionTime":"2026-02-20T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.140048 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b94ddb7-30bb-4acb-9377-089990a7c2fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4ca51d552b077446f07780f028dbef71472913f569131068f1107adf6ca6b25f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://687a28d8696979239e51443720daad7275b73f5e8a04f2c4ba1625ca22149724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://687a28d8696979239e51443720daad7275b73f5e8a04f2c4ba1625ca22149724\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-20T00:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-20T00:10:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:10:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.141990 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zj2x\" (UniqueName: \"kubernetes.io/projected/7ebd4256-9121-496b-856b-910c268419c6-kube-api-access-2zj2x\") pod \"node-ca-2b6r7\" (UID: \"7ebd4256-9121-496b-856b-910c268419c6\") " pod="openshift-image-registry/node-ca-2b6r7" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.146237 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc7zs\" (UniqueName: \"kubernetes.io/projected/00a91a87-0ad1-4805-a686-42ea9dfa6bb9-kube-api-access-fc7zs\") pod \"network-metrics-daemon-vnzx8\" (UID: \"00a91a87-0ad1-4805-a686-42ea9dfa6bb9\") " pod="openshift-multus/network-metrics-daemon-vnzx8" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.155628 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.159455 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.168497 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.182179 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.199228 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-2zwfl" Feb 20 00:11:42 crc kubenswrapper[5119]: W0220 00:11:42.199686 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02bd3ad7_f57a_48e9_86d9_ca9c36a7218d.slice/crio-616aff511e8acc09d43e910b36a052f87aa910eb8f9bb2a3f953020d0c8fd791 WatchSource:0}: Error finding container 616aff511e8acc09d43e910b36a052f87aa910eb8f9bb2a3f953020d0c8fd791: Status 404 returned error can't find the container with id 616aff511e8acc09d43e910b36a052f87aa910eb8f9bb2a3f953020d0c8fd791 Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.207006 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-2b6r7" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.210416 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d2948d-b639-4aa8-8486-078a1eb1f1c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://c70b091e898caee84f634a04ecdd706530864da43fe24e1463f6db14172dd6ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://24cfd3f7eadbcc71f1db924139cf5009452536da3cb89e686032ce703577be4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},
{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9410342c809c3092f734e0f6d40993c834cede3ba6bec4a3eac0bfe03b0769de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:10:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c04c59d79a01688cc5bf8aeefc1e779191d870147e49cc4215f610dc453f476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:10:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b6ebc5f8bbc7fbe8b2df8925c5e948ed178d1bebbf72ed38d16b595cb57c3168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:10:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192
.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://93a050e7bbe89c9df64bd2d4f49af71665c0e01049e964897414a8ba45dabf06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93a050e7bbe89c9df64bd2d4f49af71665c0e01049e964897414a8ba45dabf06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-20T00:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-20T00:10:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://14a5c1db348a6c359e6211876a93e9f3162c8962610e30000970a311eb974948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14a5c1db348a6c359e6211876a93e9f3162c8962610e30000970a311eb974948\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-20T00:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-20T00:10:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://625e626759f6cd69ada64fc36c5a5d994b1b46a7b2495c2aabd1ccefbc71d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://625e626759f6cd69ada64fc36c5a5d994b1b46a7b2495c2aabd1ccefbc71d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-20T00:10:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-20T00:10:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:10:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.212809 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.222751 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9qqjj" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.225664 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.228122 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-m8j8p" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.234455 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-rlzxr" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.240712 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:11:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.245954 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.246015 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.246036 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.246061 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.246082 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:42Z","lastTransitionTime":"2026-02-20T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:11:42 crc kubenswrapper[5119]: W0220 00:11:42.271822 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9bc4f8d_447f_4dd2_a865_6fd066513b13.slice/crio-50d5426bd1c1e4bb6a8ce1a0186057730660b09857e6fa7a9c11e2c8ce105d81 WatchSource:0}: Error finding container 50d5426bd1c1e4bb6a8ce1a0186057730660b09857e6fa7a9c11e2c8ce105d81: Status 404 returned error can't find the container with id 50d5426bd1c1e4bb6a8ce1a0186057730660b09857e6fa7a9c11e2c8ce105d81 Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.273515 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-2b6r7" event={"ID":"7ebd4256-9121-496b-856b-910c268419c6","Type":"ContainerStarted","Data":"e273a0fb9a866d7497907758d626505218265101487933482ef1fd4a789d8900"} Feb 20 00:11:42 crc kubenswrapper[5119]: W0220 00:11:42.280120 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34a35a47_a06d_4444_9141_580ed7777c52.slice/crio-0d2dd5c42224f54677a90ce54e9834477b82652566df669ec9e6143086df72b0 WatchSource:0}: Error finding container 0d2dd5c42224f54677a90ce54e9834477b82652566df669ec9e6143086df72b0: Status 404 returned error can't find the container with id 0d2dd5c42224f54677a90ce54e9834477b82652566df669ec9e6143086df72b0 Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.281372 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" event={"ID":"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d","Type":"ContainerStarted","Data":"616aff511e8acc09d43e910b36a052f87aa910eb8f9bb2a3f953020d0c8fd791"} Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.284151 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"3680eb31a37f2f2689beb3d0fd18f3be4c194ace3665dbed59682ca2fa11f8f3"} Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.285416 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"3b959d8e200aba3cdb549a831219dee5687cdcded879c45e3f24eed69590545f"} Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.289004 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rlzxr" event={"ID":"e24ea4b0-1a34-4fb3-b40c-684c03795e07","Type":"ContainerStarted","Data":"a8490063c9dd244ae42dda5a258be2f08ca21e4abb220af942ee9973ce9b538c"} Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.290288 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" event={"ID":"4cd7895c-04ad-4f77-80dc-54fd6ed54e89","Type":"ContainerStarted","Data":"f5b8dd6de490c18ed3c78de138e433e9027403c6809d8f85b0e22bc24620f10a"} Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.291146 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-2zwfl" event={"ID":"3b19fcd1-929b-4b51-8cd2-b0af16c7a5f7","Type":"ContainerStarted","Data":"0bcf59871147fbc25439aafe7f391f88e620879ee5238b191115047a1416c6a8"} Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.293914 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" 
event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"b09243649f2436d9d119f73597dd4e83b1ff50055ca4c312caab58c612316486"} Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.362919 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.363388 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.363401 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.363420 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.363433 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:42Z","lastTransitionTime":"2026-02-20T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.428909 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.428971 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.429023 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.429064 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:11:42 crc kubenswrapper[5119]: E0220 00:11:42.429172 5119 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 20 00:11:42 crc kubenswrapper[5119]: E0220 00:11:42.429245 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf 
podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:43.42922481 +0000 UTC m=+85.408189102 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 20 00:11:42 crc kubenswrapper[5119]: E0220 00:11:42.429678 5119 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 20 00:11:42 crc kubenswrapper[5119]: E0220 00:11:42.429724 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 20 00:11:42 crc kubenswrapper[5119]: E0220 00:11:42.429796 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:43.429761514 +0000 UTC m=+85.408725796 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 20 00:11:42 crc kubenswrapper[5119]: E0220 00:11:42.429802 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 00:11:42 crc kubenswrapper[5119]: E0220 00:11:42.429819 5119 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:11:42 crc kubenswrapper[5119]: E0220 00:11:42.430526 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:43.429874267 +0000 UTC m=+85.408838749 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:11:42 crc kubenswrapper[5119]: E0220 00:11:42.431646 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 20 00:11:42 crc kubenswrapper[5119]: E0220 00:11:42.431680 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 00:11:42 crc kubenswrapper[5119]: E0220 00:11:42.431713 5119 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:11:42 crc kubenswrapper[5119]: E0220 00:11:42.431798 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:43.431770577 +0000 UTC m=+85.410734879 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.466642 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.466697 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.466706 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.466725 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.466739 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:42Z","lastTransitionTime":"2026-02-20T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.530213 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:11:42 crc kubenswrapper[5119]: E0220 00:11:42.530934 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:11:43.530900812 +0000 UTC m=+85.509865104 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.569097 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.569148 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.569157 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.569175 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.569187 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:42Z","lastTransitionTime":"2026-02-20T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.631101 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00a91a87-0ad1-4805-a686-42ea9dfa6bb9-metrics-certs\") pod \"network-metrics-daemon-vnzx8\" (UID: \"00a91a87-0ad1-4805-a686-42ea9dfa6bb9\") " pod="openshift-multus/network-metrics-daemon-vnzx8" Feb 20 00:11:42 crc kubenswrapper[5119]: E0220 00:11:42.631392 5119 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 00:11:42 crc kubenswrapper[5119]: E0220 00:11:42.631482 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00a91a87-0ad1-4805-a686-42ea9dfa6bb9-metrics-certs podName:00a91a87-0ad1-4805-a686-42ea9dfa6bb9 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:43.631461215 +0000 UTC m=+85.610425507 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/00a91a87-0ad1-4805-a686-42ea9dfa6bb9-metrics-certs") pod "network-metrics-daemon-vnzx8" (UID: "00a91a87-0ad1-4805-a686-42ea9dfa6bb9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.675051 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.675102 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.675116 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.675137 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.675152 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:42Z","lastTransitionTime":"2026-02-20T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.777317 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.777382 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.777394 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.777414 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.777426 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:42Z","lastTransitionTime":"2026-02-20T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.863814 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.865685 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.868589 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.872020 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.879021 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.881014 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.881057 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.881067 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.881083 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.881093 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:42Z","lastTransitionTime":"2026-02-20T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.882460 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.883597 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.885995 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.886728 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.888526 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.891201 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.894085 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.895081 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.899634 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.900300 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.902150 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.903185 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.905235 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.909235 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Feb 20 00:11:42 
crc kubenswrapper[5119]: I0220 00:11:42.911660 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.914867 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.919270 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.922131 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.925205 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.927341 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.930299 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.933918 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.935356 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.940483 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.941679 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.946089 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.947864 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.952707 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.954766 5119 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.956919 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.957773 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.959117 5119 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.959262 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.963138 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.965771 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.967691 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.969289 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.970080 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.972432 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.974312 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.975805 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.977220 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.980078 5119 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.984571 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.984635 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.984655 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.984679 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.984699 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:42Z","lastTransitionTime":"2026-02-20T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.984884 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.986235 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.988407 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.989947 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.992072 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.994073 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Feb 20 00:11:42 crc kubenswrapper[5119]: I0220 00:11:42.998767 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.000884 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.002726 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" 
path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.004193 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.088253 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.088314 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.088335 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.088357 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.088375 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:43Z","lastTransitionTime":"2026-02-20T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.190888 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.190948 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.190960 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.190976 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.190987 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:43Z","lastTransitionTime":"2026-02-20T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.293578 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.293653 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.293672 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.293698 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.293714 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:43Z","lastTransitionTime":"2026-02-20T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.301421 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-2b6r7" event={"ID":"7ebd4256-9121-496b-856b-910c268419c6","Type":"ContainerStarted","Data":"e9c13975bdd3ffbc9b492fc05c14bce14e7003fdd5e4370a5df1db85db56f358"} Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.304675 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" event={"ID":"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d","Type":"ContainerStarted","Data":"58520dd946442e7c17bd5d20b0d13573e8710c9a89bd1ea811f31a87dddb89fc"} Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.305172 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" event={"ID":"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d","Type":"ContainerStarted","Data":"e72863f3bb34d69a32e4bf16d58f08f3318fc63f4aed8833baffafd71c833abb"} Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.306362 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"6331147fd3ef8c0a84ab883eafe2f01b550b04ccb16dd7ec3834a5ee7d231ef1"} Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.308731 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9qqjj" event={"ID":"a9bc4f8d-447f-4dd2-a865-6fd066513b13","Type":"ContainerStarted","Data":"2c3457a4f0c4fb5545204589c52f65cbcfa555040c04e5d8057aa98c7adf983c"} Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.308845 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9qqjj" event={"ID":"a9bc4f8d-447f-4dd2-a865-6fd066513b13","Type":"ContainerStarted","Data":"22d144085753de360f7f333957476990e604e3f71d125d60b727e26f35243f5b"} Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.308867 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9qqjj" 
event={"ID":"a9bc4f8d-447f-4dd2-a865-6fd066513b13","Type":"ContainerStarted","Data":"50d5426bd1c1e4bb6a8ce1a0186057730660b09857e6fa7a9c11e2c8ce105d81"} Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.310699 5119 generic.go:358] "Generic (PLEG): container finished" podID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerID="d9a0ed662d190625d64004072e406885ff260b901839dcf84a987faabb97d362" exitCode=0 Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.310866 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" event={"ID":"4cd7895c-04ad-4f77-80dc-54fd6ed54e89","Type":"ContainerDied","Data":"d9a0ed662d190625d64004072e406885ff260b901839dcf84a987faabb97d362"} Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.315507 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"19cdf49ee86923ebeac1d424f8a793b700d92c7faed202da734401a698d6d22d"} Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.315613 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"979d5096a073e928ada6608d80afddda4c36f9b0b54bf4f306cc7e6b019b3075"} Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.319572 5119 generic.go:358] "Generic (PLEG): container finished" podID="34a35a47-a06d-4444-9141-580ed7777c52" containerID="fdbd2919e27ed8dba000d806e570169cb86f8a84b62da40d0b318c9292e88fd2" exitCode=0 Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.319619 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m8j8p" event={"ID":"34a35a47-a06d-4444-9141-580ed7777c52","Type":"ContainerDied","Data":"fdbd2919e27ed8dba000d806e570169cb86f8a84b62da40d0b318c9292e88fd2"} Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.319688 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m8j8p" event={"ID":"34a35a47-a06d-4444-9141-580ed7777c52","Type":"ContainerStarted","Data":"0d2dd5c42224f54677a90ce54e9834477b82652566df669ec9e6143086df72b0"} Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.325133 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rlzxr" event={"ID":"e24ea4b0-1a34-4fb3-b40c-684c03795e07","Type":"ContainerStarted","Data":"bff4e744bb7114819286cb3231d0747843a1d7f8308e2439bc6b2ed6e66a9ca9"} Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.334435 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-2zwfl" event={"ID":"3b19fcd1-929b-4b51-8cd2-b0af16c7a5f7","Type":"ContainerStarted","Data":"c4379a5304ee0d8e2abd39bd2e58e2d8b05b489284d7e956c6e504b319ed421c"} Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.394190 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=2.394155383 podStartE2EDuration="2.394155383s" podCreationTimestamp="2026-02-20 00:11:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:11:43.369935493 +0000 UTC m=+85.348899855" watchObservedRunningTime="2026-02-20 00:11:43.394155383 +0000 UTC m=+85.373119715" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.396270 5119 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.396318 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.396338 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.396364 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.396384 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:43Z","lastTransitionTime":"2026-02-20T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.438401 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-2b6r7" podStartSLOduration=64.438383591 podStartE2EDuration="1m4.438383591s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:11:43.438123765 +0000 UTC m=+85.417088087" watchObservedRunningTime="2026-02-20 00:11:43.438383591 +0000 UTC m=+85.417347883" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.440723 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:11:43 crc kubenswrapper[5119]: E0220 00:11:43.441128 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 20 00:11:43 crc kubenswrapper[5119]: E0220 00:11:43.441182 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 00:11:43 crc kubenswrapper[5119]: E0220 00:11:43.441211 5119 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:11:43 crc kubenswrapper[5119]: E0220 00:11:43.441328 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:45.44129242 +0000 UTC m=+87.420256762 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.444760 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.445068 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.445577 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:11:43 crc kubenswrapper[5119]: E0220 00:11:43.447519 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 20 00:11:43 crc kubenswrapper[5119]: E0220 00:11:43.447615 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 00:11:43 crc kubenswrapper[5119]: E0220 00:11:43.447648 5119 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:11:43 crc kubenswrapper[5119]: E0220 00:11:43.447775 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:45.447736303 +0000 UTC m=+87.426700645 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:11:43 crc kubenswrapper[5119]: E0220 00:11:43.451361 5119 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 20 00:11:43 crc kubenswrapper[5119]: E0220 00:11:43.452464 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:45.452439329 +0000 UTC m=+87.431403631 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 20 00:11:43 crc kubenswrapper[5119]: E0220 00:11:43.454123 5119 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 20 00:11:43 crc kubenswrapper[5119]: E0220 00:11:43.454250 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:45.454219538 +0000 UTC m=+87.433183970 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.499239 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.499322 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.499345 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.499373 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.499395 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:43Z","lastTransitionTime":"2026-02-20T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.543507 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=2.543473026 podStartE2EDuration="2.543473026s" podCreationTimestamp="2026-02-20 00:11:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:11:43.543309482 +0000 UTC m=+85.522273784" watchObservedRunningTime="2026-02-20 00:11:43.543473026 +0000 UTC m=+85.522437318" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.546678 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:11:43 crc kubenswrapper[5119]: E0220 00:11:43.546911 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:11:45.546833327 +0000 UTC m=+87.525797639 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.603215 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.603254 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.603264 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.603283 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.603294 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:43Z","lastTransitionTime":"2026-02-20T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.646090 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=2.646067584 podStartE2EDuration="2.646067584s" podCreationTimestamp="2026-02-20 00:11:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:11:43.644904632 +0000 UTC m=+85.623868934" watchObservedRunningTime="2026-02-20 00:11:43.646067584 +0000 UTC m=+85.625031886" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.650134 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00a91a87-0ad1-4805-a686-42ea9dfa6bb9-metrics-certs\") pod \"network-metrics-daemon-vnzx8\" (UID: \"00a91a87-0ad1-4805-a686-42ea9dfa6bb9\") " pod="openshift-multus/network-metrics-daemon-vnzx8" Feb 20 00:11:43 crc kubenswrapper[5119]: E0220 00:11:43.650372 5119 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 00:11:43 crc kubenswrapper[5119]: E0220 00:11:43.650502 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00a91a87-0ad1-4805-a686-42ea9dfa6bb9-metrics-certs podName:00a91a87-0ad1-4805-a686-42ea9dfa6bb9 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:45.650425061 +0000 UTC m=+87.629389373 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/00a91a87-0ad1-4805-a686-42ea9dfa6bb9-metrics-certs") pod "network-metrics-daemon-vnzx8" (UID: "00a91a87-0ad1-4805-a686-42ea9dfa6bb9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.655807 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=2.6557909950000003 podStartE2EDuration="2.655790995s" podCreationTimestamp="2026-02-20 00:11:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:11:43.655560919 +0000 UTC m=+85.634525221" watchObservedRunningTime="2026-02-20 00:11:43.655790995 +0000 UTC m=+85.634755287" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.707672 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.708042 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.708056 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.708072 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.708087 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:43Z","lastTransitionTime":"2026-02-20T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.809760 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.809801 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.809811 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.809826 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.809837 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:43Z","lastTransitionTime":"2026-02-20T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.848132 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-rlzxr" podStartSLOduration=64.848111894 podStartE2EDuration="1m4.848111894s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:11:43.847658682 +0000 UTC m=+85.826622974" watchObservedRunningTime="2026-02-20 00:11:43.848111894 +0000 UTC m=+85.827076186" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.855748 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.855748 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vnzx8" Feb 20 00:11:43 crc kubenswrapper[5119]: E0220 00:11:43.855880 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 20 00:11:43 crc kubenswrapper[5119]: E0220 00:11:43.855979 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vnzx8" podUID="00a91a87-0ad1-4805-a686-42ea9dfa6bb9" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.856005 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:11:43 crc kubenswrapper[5119]: E0220 00:11:43.856238 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.856295 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:11:43 crc kubenswrapper[5119]: E0220 00:11:43.856383 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.865411 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" podStartSLOduration=64.865388458 podStartE2EDuration="1m4.865388458s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:11:43.861825502 +0000 UTC m=+85.840789794" watchObservedRunningTime="2026-02-20 00:11:43.865388458 +0000 UTC m=+85.844352740" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.880740 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-2zwfl" podStartSLOduration=64.88071841 podStartE2EDuration="1m4.88071841s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:11:43.879438885 +0000 UTC m=+85.858403177" watchObservedRunningTime="2026-02-20 00:11:43.88071841 +0000 UTC m=+85.859682702" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.912580 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.912631 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.912647 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.912672 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.912691 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:43Z","lastTransitionTime":"2026-02-20T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:43 crc kubenswrapper[5119]: I0220 00:11:43.918816 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9qqjj" podStartSLOduration=64.918800084 podStartE2EDuration="1m4.918800084s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:11:43.898421016 +0000 UTC m=+85.877385328" watchObservedRunningTime="2026-02-20 00:11:43.918800084 +0000 UTC m=+85.897764386" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.015179 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.015223 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.015234 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.015249 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.015259 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:44Z","lastTransitionTime":"2026-02-20T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.118249 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.118314 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.118330 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.118361 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.118376 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:44Z","lastTransitionTime":"2026-02-20T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.221885 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.221945 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.221962 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.221982 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.221997 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:44Z","lastTransitionTime":"2026-02-20T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.324852 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.324906 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.324920 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.324942 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.324957 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:44Z","lastTransitionTime":"2026-02-20T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.350778 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" event={"ID":"4cd7895c-04ad-4f77-80dc-54fd6ed54e89","Type":"ContainerStarted","Data":"8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a"} Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.350849 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" event={"ID":"4cd7895c-04ad-4f77-80dc-54fd6ed54e89","Type":"ContainerStarted","Data":"010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b"} Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.350865 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" event={"ID":"4cd7895c-04ad-4f77-80dc-54fd6ed54e89","Type":"ContainerStarted","Data":"3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b"} Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.350877 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" event={"ID":"4cd7895c-04ad-4f77-80dc-54fd6ed54e89","Type":"ContainerStarted","Data":"cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218"} Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.353857 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m8j8p" event={"ID":"34a35a47-a06d-4444-9141-580ed7777c52","Type":"ContainerStarted","Data":"d3915888b364242d9f3d0cba19da75cbe17b7a6810467a69986e2a515a3b64bf"} Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.429064 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.429135 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.429149 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.429172 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.429187 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:44Z","lastTransitionTime":"2026-02-20T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.531443 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.531961 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.531974 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.531992 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.532007 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:44Z","lastTransitionTime":"2026-02-20T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.634745 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.634792 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.634802 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.634819 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.634830 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:44Z","lastTransitionTime":"2026-02-20T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.737483 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.737565 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.737578 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.737601 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.737619 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:44Z","lastTransitionTime":"2026-02-20T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.840248 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.840307 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.840317 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.840334 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.840346 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:44Z","lastTransitionTime":"2026-02-20T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.941990 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.942041 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.942054 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.942075 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:44 crc kubenswrapper[5119]: I0220 00:11:44.942088 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:44Z","lastTransitionTime":"2026-02-20T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.043884 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.043933 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.043948 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.043967 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.043978 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:45Z","lastTransitionTime":"2026-02-20T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.147041 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.147095 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.147107 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.147123 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.147135 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:45Z","lastTransitionTime":"2026-02-20T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.250676 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.250734 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.250747 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.250769 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.250784 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:45Z","lastTransitionTime":"2026-02-20T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.354493 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.354597 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.354616 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.354643 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.354668 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:45Z","lastTransitionTime":"2026-02-20T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.360225 5119 generic.go:358] "Generic (PLEG): container finished" podID="34a35a47-a06d-4444-9141-580ed7777c52" containerID="d3915888b364242d9f3d0cba19da75cbe17b7a6810467a69986e2a515a3b64bf" exitCode=0 Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.360383 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m8j8p" event={"ID":"34a35a47-a06d-4444-9141-580ed7777c52","Type":"ContainerDied","Data":"d3915888b364242d9f3d0cba19da75cbe17b7a6810467a69986e2a515a3b64bf"} Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.367687 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" event={"ID":"4cd7895c-04ad-4f77-80dc-54fd6ed54e89","Type":"ContainerStarted","Data":"f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47"} Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.367770 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" event={"ID":"4cd7895c-04ad-4f77-80dc-54fd6ed54e89","Type":"ContainerStarted","Data":"ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd"} Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.461338 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.461421 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.461440 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.461478 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.461500 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:45Z","lastTransitionTime":"2026-02-20T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.473781 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.473862 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:11:45 crc kubenswrapper[5119]: E0220 00:11:45.474049 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 20 00:11:45 crc kubenswrapper[5119]: E0220 00:11:45.474077 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 00:11:45 crc kubenswrapper[5119]: E0220 00:11:45.474096 5119 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:11:45 crc kubenswrapper[5119]: E0220 00:11:45.474171 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:49.474144545 +0000 UTC m=+91.453108847 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:11:45 crc kubenswrapper[5119]: E0220 00:11:45.474242 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 20 00:11:45 crc kubenswrapper[5119]: E0220 00:11:45.474288 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 00:11:45 crc kubenswrapper[5119]: E0220 00:11:45.474305 5119 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:11:45 crc kubenswrapper[5119]: E0220 00:11:45.474440 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:49.474414894 +0000 UTC m=+91.453379186 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.474606 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:11:45 crc kubenswrapper[5119]: E0220 00:11:45.474698 5119 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 20 00:11:45 crc kubenswrapper[5119]: E0220 00:11:45.474737 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:49.474729002 +0000 UTC m=+91.453693294 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.475039 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:11:45 crc kubenswrapper[5119]: E0220 00:11:45.475171 5119 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 20 00:11:45 crc kubenswrapper[5119]: E0220 00:11:45.475243 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:49.475228725 +0000 UTC m=+91.454193027 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.564464 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.564522 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.564535 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.564606 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.564622 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:45Z","lastTransitionTime":"2026-02-20T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.576306 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:11:45 crc kubenswrapper[5119]: E0220 00:11:45.576711 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:11:49.576683132 +0000 UTC m=+91.555647444 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.669928 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.670416 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.670431 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.670446 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.670456 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:45Z","lastTransitionTime":"2026-02-20T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.678084 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00a91a87-0ad1-4805-a686-42ea9dfa6bb9-metrics-certs\") pod \"network-metrics-daemon-vnzx8\" (UID: \"00a91a87-0ad1-4805-a686-42ea9dfa6bb9\") " pod="openshift-multus/network-metrics-daemon-vnzx8" Feb 20 00:11:45 crc kubenswrapper[5119]: E0220 00:11:45.678414 5119 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 00:11:45 crc kubenswrapper[5119]: E0220 00:11:45.678610 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00a91a87-0ad1-4805-a686-42ea9dfa6bb9-metrics-certs podName:00a91a87-0ad1-4805-a686-42ea9dfa6bb9 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:49.67857314 +0000 UTC m=+91.657537472 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/00a91a87-0ad1-4805-a686-42ea9dfa6bb9-metrics-certs") pod "network-metrics-daemon-vnzx8" (UID: "00a91a87-0ad1-4805-a686-42ea9dfa6bb9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.773701 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.773777 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.773827 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.773870 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.773882 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:45Z","lastTransitionTime":"2026-02-20T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.855587 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.855651 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.855671 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:11:45 crc kubenswrapper[5119]: E0220 00:11:45.855944 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 20 00:11:45 crc kubenswrapper[5119]: E0220 00:11:45.856110 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 20 00:11:45 crc kubenswrapper[5119]: E0220 00:11:45.856221 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.856224 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vnzx8" Feb 20 00:11:45 crc kubenswrapper[5119]: E0220 00:11:45.856373 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vnzx8" podUID="00a91a87-0ad1-4805-a686-42ea9dfa6bb9" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.857625 5119 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.876871 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.876923 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.876933 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.876949 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.876959 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:45Z","lastTransitionTime":"2026-02-20T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.979587 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.979643 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.979656 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.979676 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:45 crc kubenswrapper[5119]: I0220 00:11:45.979691 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:45Z","lastTransitionTime":"2026-02-20T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.083434 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.083486 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.083499 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.083517 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.083530 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:46Z","lastTransitionTime":"2026-02-20T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.186064 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.186105 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.186114 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.186133 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.186143 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:46Z","lastTransitionTime":"2026-02-20T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.289326 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.289680 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.289851 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.289999 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.290193 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:46Z","lastTransitionTime":"2026-02-20T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.376626 5119 generic.go:358] "Generic (PLEG): container finished" podID="34a35a47-a06d-4444-9141-580ed7777c52" containerID="ec5f9bcd289f8a6fb0c0198a7ef7f2391732165c960c6aba5dbd68c0d7b8f4fd" exitCode=0 Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.376752 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m8j8p" event={"ID":"34a35a47-a06d-4444-9141-580ed7777c52","Type":"ContainerDied","Data":"ec5f9bcd289f8a6fb0c0198a7ef7f2391732165c960c6aba5dbd68c0d7b8f4fd"} Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.378824 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"651f02dd1ad92f0c89189811ccf6931bb8f05db3e7216dc3536de73a1c668722"} Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.393279 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.393353 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.393374 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.393400 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.393419 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:46Z","lastTransitionTime":"2026-02-20T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.496796 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.496860 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.496875 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.496897 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.496911 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:46Z","lastTransitionTime":"2026-02-20T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.599041 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.599101 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.599117 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.599139 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.599153 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:46Z","lastTransitionTime":"2026-02-20T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.701280 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.701330 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.701344 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.701364 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.701378 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:46Z","lastTransitionTime":"2026-02-20T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.810135 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.810210 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.810231 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.810264 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.810287 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:46Z","lastTransitionTime":"2026-02-20T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.913097 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.913149 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.913160 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.913174 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:46 crc kubenswrapper[5119]: I0220 00:11:46.913185 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:46Z","lastTransitionTime":"2026-02-20T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.016103 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.016187 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.016208 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.016239 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.016259 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:47Z","lastTransitionTime":"2026-02-20T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.119258 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.119325 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.119352 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.119383 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.119404 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:47Z","lastTransitionTime":"2026-02-20T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.221823 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.221884 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.221899 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.221924 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.221942 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:47Z","lastTransitionTime":"2026-02-20T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.324186 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.324247 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.324258 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.324275 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.324293 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:47Z","lastTransitionTime":"2026-02-20T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.389813 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" event={"ID":"4cd7895c-04ad-4f77-80dc-54fd6ed54e89","Type":"ContainerStarted","Data":"b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e"} Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.392583 5119 generic.go:358] "Generic (PLEG): container finished" podID="34a35a47-a06d-4444-9141-580ed7777c52" containerID="89ac53248a7a5ffe6b68f0141064a4fcec75f4472b638287607a0b2c0929c4b5" exitCode=0 Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.392643 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m8j8p" event={"ID":"34a35a47-a06d-4444-9141-580ed7777c52","Type":"ContainerDied","Data":"89ac53248a7a5ffe6b68f0141064a4fcec75f4472b638287607a0b2c0929c4b5"} Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.428831 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.428910 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.428928 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.428952 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.428971 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:47Z","lastTransitionTime":"2026-02-20T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.531642 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.531701 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.531714 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.531738 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.531751 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:47Z","lastTransitionTime":"2026-02-20T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.634149 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.634206 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.634220 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.634242 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.634255 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:47Z","lastTransitionTime":"2026-02-20T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.736581 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.736637 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.736652 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.736670 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.736684 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:47Z","lastTransitionTime":"2026-02-20T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.839066 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.839120 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.839135 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.839153 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.839167 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:47Z","lastTransitionTime":"2026-02-20T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.854925 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vnzx8" Feb 20 00:11:47 crc kubenswrapper[5119]: E0220 00:11:47.855047 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vnzx8" podUID="00a91a87-0ad1-4805-a686-42ea9dfa6bb9" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.855129 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.855255 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:11:47 crc kubenswrapper[5119]: E0220 00:11:47.855434 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.855445 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:11:47 crc kubenswrapper[5119]: E0220 00:11:47.855553 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 20 00:11:47 crc kubenswrapper[5119]: E0220 00:11:47.855776 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.942105 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.942180 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.942199 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.942222 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:47 crc kubenswrapper[5119]: I0220 00:11:47.942237 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:47Z","lastTransitionTime":"2026-02-20T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:48 crc kubenswrapper[5119]: I0220 00:11:48.005994 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:11:48 crc kubenswrapper[5119]: I0220 00:11:48.006067 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:11:48 crc kubenswrapper[5119]: I0220 00:11:48.006085 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:11:48 crc kubenswrapper[5119]: I0220 00:11:48.006110 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:11:48 crc kubenswrapper[5119]: I0220 00:11:48.006130 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:11:48Z","lastTransitionTime":"2026-02-20T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:11:48 crc kubenswrapper[5119]: I0220 00:11:48.061169 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pnkjx"] Feb 20 00:11:48 crc kubenswrapper[5119]: I0220 00:11:48.295305 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pnkjx" Feb 20 00:11:48 crc kubenswrapper[5119]: I0220 00:11:48.299204 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Feb 20 00:11:48 crc kubenswrapper[5119]: I0220 00:11:48.301166 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Feb 20 00:11:48 crc kubenswrapper[5119]: I0220 00:11:48.301845 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Feb 20 00:11:48 crc kubenswrapper[5119]: I0220 00:11:48.302114 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Feb 20 00:11:48 crc kubenswrapper[5119]: I0220 00:11:48.399687 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m8j8p" event={"ID":"34a35a47-a06d-4444-9141-580ed7777c52","Type":"ContainerStarted","Data":"675f9b423e5415a62396e54325caa0b663ddf6e899d009f4e8d06823615adc57"} Feb 20 00:11:48 crc kubenswrapper[5119]: I0220 00:11:48.408451 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02c784cd-812b-4550-8911-536fee85f710-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-pnkjx\" (UID: \"02c784cd-812b-4550-8911-536fee85f710\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pnkjx" Feb 20 00:11:48 crc kubenswrapper[5119]: I0220 00:11:48.408786 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/02c784cd-812b-4550-8911-536fee85f710-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-pnkjx\" (UID: \"02c784cd-812b-4550-8911-536fee85f710\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pnkjx" Feb 20 00:11:48 crc kubenswrapper[5119]: I0220 00:11:48.408868 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/02c784cd-812b-4550-8911-536fee85f710-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-pnkjx\" (UID: \"02c784cd-812b-4550-8911-536fee85f710\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pnkjx" Feb 20 00:11:48 crc kubenswrapper[5119]: I0220 00:11:48.408908 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02c784cd-812b-4550-8911-536fee85f710-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-pnkjx\" (UID: \"02c784cd-812b-4550-8911-536fee85f710\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pnkjx" Feb 20 00:11:48 crc kubenswrapper[5119]: I0220 00:11:48.408940 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/02c784cd-812b-4550-8911-536fee85f710-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-pnkjx\" (UID: \"02c784cd-812b-4550-8911-536fee85f710\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pnkjx" Feb 20 00:11:48 crc kubenswrapper[5119]: I0220 00:11:48.509974 5119 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02c784cd-812b-4550-8911-536fee85f710-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-pnkjx\" (UID: \"02c784cd-812b-4550-8911-536fee85f710\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pnkjx" Feb 20 00:11:48 crc kubenswrapper[5119]: I0220 00:11:48.510352 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/02c784cd-812b-4550-8911-536fee85f710-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-pnkjx\" (UID: \"02c784cd-812b-4550-8911-536fee85f710\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pnkjx" Feb 20 00:11:48 crc kubenswrapper[5119]: I0220 00:11:48.510477 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/02c784cd-812b-4550-8911-536fee85f710-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-pnkjx\" (UID: \"02c784cd-812b-4550-8911-536fee85f710\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pnkjx" Feb 20 00:11:48 crc kubenswrapper[5119]: I0220 00:11:48.510607 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02c784cd-812b-4550-8911-536fee85f710-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-pnkjx\" (UID: \"02c784cd-812b-4550-8911-536fee85f710\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pnkjx" Feb 20 00:11:48 crc kubenswrapper[5119]: I0220 00:11:48.510715 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/02c784cd-812b-4550-8911-536fee85f710-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-pnkjx\" (UID: \"02c784cd-812b-4550-8911-536fee85f710\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pnkjx" Feb 20 00:11:48 crc kubenswrapper[5119]: I0220 00:11:48.511239 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/02c784cd-812b-4550-8911-536fee85f710-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-pnkjx\" (UID: \"02c784cd-812b-4550-8911-536fee85f710\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pnkjx" Feb 20 00:11:48 crc kubenswrapper[5119]: I0220 00:11:48.512327 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/02c784cd-812b-4550-8911-536fee85f710-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-pnkjx\" (UID: \"02c784cd-812b-4550-8911-536fee85f710\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pnkjx" Feb 20 00:11:48 crc kubenswrapper[5119]: I0220 00:11:48.512504 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/02c784cd-812b-4550-8911-536fee85f710-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-pnkjx\" (UID: \"02c784cd-812b-4550-8911-536fee85f710\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pnkjx" Feb 20 00:11:48 crc kubenswrapper[5119]: I0220 00:11:48.524373 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/02c784cd-812b-4550-8911-536fee85f710-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-pnkjx\" (UID: \"02c784cd-812b-4550-8911-536fee85f710\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pnkjx" Feb 20 00:11:48 crc kubenswrapper[5119]: I0220 00:11:48.540955 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02c784cd-812b-4550-8911-536fee85f710-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-pnkjx\" (UID: \"02c784cd-812b-4550-8911-536fee85f710\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pnkjx" Feb 20 00:11:48 crc kubenswrapper[5119]: I0220 00:11:48.627945 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pnkjx" Feb 20 00:11:48 crc kubenswrapper[5119]: W0220 00:11:48.650017 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02c784cd_812b_4550_8911_536fee85f710.slice/crio-cf47423cff1dbcfdf8c7fda91626b261ad96a3513bc9b202a32ae099698b4d06 WatchSource:0}: Error finding container cf47423cff1dbcfdf8c7fda91626b261ad96a3513bc9b202a32ae099698b4d06: Status 404 returned error can't find the container with id cf47423cff1dbcfdf8c7fda91626b261ad96a3513bc9b202a32ae099698b4d06 Feb 20 00:11:48 crc kubenswrapper[5119]: I0220 00:11:48.803802 5119 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Feb 20 00:11:48 crc kubenswrapper[5119]: I0220 00:11:48.829366 5119 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Feb 20 00:11:49 crc kubenswrapper[5119]: I0220 00:11:49.411565 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" event={"ID":"4cd7895c-04ad-4f77-80dc-54fd6ed54e89","Type":"ContainerStarted","Data":"674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb"} Feb 20 00:11:49 crc kubenswrapper[5119]: I0220 00:11:49.412166 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:49 crc kubenswrapper[5119]: I0220 00:11:49.412189 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:49 crc kubenswrapper[5119]: I0220 00:11:49.419267 5119 generic.go:358] "Generic (PLEG): container finished" podID="34a35a47-a06d-4444-9141-580ed7777c52" containerID="675f9b423e5415a62396e54325caa0b663ddf6e899d009f4e8d06823615adc57" exitCode=0 Feb 20 00:11:49 crc kubenswrapper[5119]: I0220 00:11:49.419587 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m8j8p" event={"ID":"34a35a47-a06d-4444-9141-580ed7777c52","Type":"ContainerDied","Data":"675f9b423e5415a62396e54325caa0b663ddf6e899d009f4e8d06823615adc57"} Feb 20 00:11:49 crc kubenswrapper[5119]: I0220 00:11:49.428501 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pnkjx" event={"ID":"02c784cd-812b-4550-8911-536fee85f710","Type":"ContainerStarted","Data":"12de55f68cbcc9de881ddaa782631ac13e6ec79d43b7ba7b057af4dc659e1954"} Feb 20 00:11:49 crc kubenswrapper[5119]: I0220 00:11:49.428586 5119 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pnkjx" event={"ID":"02c784cd-812b-4550-8911-536fee85f710","Type":"ContainerStarted","Data":"cf47423cff1dbcfdf8c7fda91626b261ad96a3513bc9b202a32ae099698b4d06"} Feb 20 00:11:49 crc kubenswrapper[5119]: I0220 00:11:49.454986 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" podStartSLOduration=70.454951386 podStartE2EDuration="1m10.454951386s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:11:49.449298983 +0000 UTC m=+91.428263315" watchObservedRunningTime="2026-02-20 00:11:49.454951386 +0000 UTC m=+91.433915748" Feb 20 00:11:49 crc kubenswrapper[5119]: I0220 00:11:49.461957 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:49 crc kubenswrapper[5119]: I0220 00:11:49.524159 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:11:49 crc kubenswrapper[5119]: E0220 00:11:49.524320 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 20 00:11:49 crc kubenswrapper[5119]: E0220 00:11:49.524337 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 00:11:49 crc kubenswrapper[5119]: E0220 00:11:49.524350 5119 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:11:49 crc kubenswrapper[5119]: E0220 00:11:49.524409 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:57.524393002 +0000 UTC m=+99.503357294 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:11:49 crc kubenswrapper[5119]: I0220 00:11:49.524780 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:11:49 crc kubenswrapper[5119]: I0220 00:11:49.524895 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:11:49 crc kubenswrapper[5119]: I0220 00:11:49.525161 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:11:49 crc kubenswrapper[5119]: E0220 00:11:49.525284 5119 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 20 00:11:49 crc kubenswrapper[5119]: E0220 00:11:49.525373 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:57.525347728 +0000 UTC m=+99.504312050 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 20 00:11:49 crc kubenswrapper[5119]: E0220 00:11:49.525480 5119 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 20 00:11:49 crc kubenswrapper[5119]: E0220 00:11:49.525537 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:57.525524872 +0000 UTC m=+99.504489204 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 20 00:11:49 crc kubenswrapper[5119]: E0220 00:11:49.525492 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 20 00:11:49 crc kubenswrapper[5119]: E0220 00:11:49.526350 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 00:11:49 crc kubenswrapper[5119]: E0220 00:11:49.526359 5119 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:11:49 crc kubenswrapper[5119]: E0220 00:11:49.526386 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:57.526376586 +0000 UTC m=+99.505340878 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:11:49 crc kubenswrapper[5119]: I0220 00:11:49.554388 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-pnkjx" podStartSLOduration=70.554349517 podStartE2EDuration="1m10.554349517s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:11:49.500437428 +0000 UTC m=+91.479401750" watchObservedRunningTime="2026-02-20 00:11:49.554349517 +0000 UTC m=+91.533313849" Feb 20 00:11:49 crc kubenswrapper[5119]: I0220 00:11:49.626404 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:11:49 crc kubenswrapper[5119]: E0220 00:11:49.626660 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:11:57.626626789 +0000 UTC m=+99.605591101 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:11:49 crc kubenswrapper[5119]: I0220 00:11:49.728029 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00a91a87-0ad1-4805-a686-42ea9dfa6bb9-metrics-certs\") pod \"network-metrics-daemon-vnzx8\" (UID: \"00a91a87-0ad1-4805-a686-42ea9dfa6bb9\") " pod="openshift-multus/network-metrics-daemon-vnzx8" Feb 20 00:11:49 crc kubenswrapper[5119]: E0220 00:11:49.728259 5119 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 00:11:49 crc kubenswrapper[5119]: E0220 00:11:49.728628 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00a91a87-0ad1-4805-a686-42ea9dfa6bb9-metrics-certs podName:00a91a87-0ad1-4805-a686-42ea9dfa6bb9 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:57.72860059 +0000 UTC m=+99.707565082 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/00a91a87-0ad1-4805-a686-42ea9dfa6bb9-metrics-certs") pod "network-metrics-daemon-vnzx8" (UID: "00a91a87-0ad1-4805-a686-42ea9dfa6bb9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 00:11:49 crc kubenswrapper[5119]: I0220 00:11:49.855642 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vnzx8" Feb 20 00:11:49 crc kubenswrapper[5119]: E0220 00:11:49.855827 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vnzx8" podUID="00a91a87-0ad1-4805-a686-42ea9dfa6bb9" Feb 20 00:11:49 crc kubenswrapper[5119]: I0220 00:11:49.856269 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:11:49 crc kubenswrapper[5119]: I0220 00:11:49.856365 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:11:49 crc kubenswrapper[5119]: I0220 00:11:49.856374 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:11:49 crc kubenswrapper[5119]: E0220 00:11:49.856505 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 20 00:11:49 crc kubenswrapper[5119]: E0220 00:11:49.856716 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 20 00:11:49 crc kubenswrapper[5119]: E0220 00:11:49.857172 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 20 00:11:50 crc kubenswrapper[5119]: I0220 00:11:50.438397 5119 generic.go:358] "Generic (PLEG): container finished" podID="34a35a47-a06d-4444-9141-580ed7777c52" containerID="6a199e5fb64fe6d3d04ebd16c2d09222e327cfac94c90b775a665330263b9ddc" exitCode=0 Feb 20 00:11:50 crc kubenswrapper[5119]: I0220 00:11:50.438483 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m8j8p" event={"ID":"34a35a47-a06d-4444-9141-580ed7777c52","Type":"ContainerDied","Data":"6a199e5fb64fe6d3d04ebd16c2d09222e327cfac94c90b775a665330263b9ddc"} Feb 20 00:11:50 crc kubenswrapper[5119]: I0220 00:11:50.440086 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:50 crc kubenswrapper[5119]: I0220 00:11:50.481231 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:11:51 crc kubenswrapper[5119]: I0220 00:11:51.448532 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m8j8p" event={"ID":"34a35a47-a06d-4444-9141-580ed7777c52","Type":"ContainerStarted","Data":"d4b62f395d4b0fd2b9cd84c77c8da17a714ea4e062f33b47151caa3d12a9efff"} Feb 20 00:11:51 crc kubenswrapper[5119]: I0220 00:11:51.828555 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-m8j8p" podStartSLOduration=72.828510128 podStartE2EDuration="1m12.828510128s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:11:51.521791285 +0000 UTC m=+93.500755587" watchObservedRunningTime="2026-02-20 00:11:51.828510128 +0000 UTC m=+93.807474420" Feb 20 00:11:51 crc kubenswrapper[5119]: I0220 00:11:51.829776 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vnzx8"] Feb 20 00:11:51 crc kubenswrapper[5119]: I0220 00:11:51.829939 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vnzx8" Feb 20 00:11:51 crc kubenswrapper[5119]: E0220 00:11:51.830060 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vnzx8" podUID="00a91a87-0ad1-4805-a686-42ea9dfa6bb9" Feb 20 00:11:51 crc kubenswrapper[5119]: I0220 00:11:51.855805 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:11:51 crc kubenswrapper[5119]: I0220 00:11:51.855875 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:11:51 crc kubenswrapper[5119]: I0220 00:11:51.855949 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:11:51 crc kubenswrapper[5119]: E0220 00:11:51.856050 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 20 00:11:51 crc kubenswrapper[5119]: E0220 00:11:51.856280 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 20 00:11:51 crc kubenswrapper[5119]: E0220 00:11:51.856412 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 20 00:11:53 crc kubenswrapper[5119]: I0220 00:11:53.855172 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:11:53 crc kubenswrapper[5119]: I0220 00:11:53.855473 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vnzx8" Feb 20 00:11:53 crc kubenswrapper[5119]: E0220 00:11:53.855471 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 20 00:11:53 crc kubenswrapper[5119]: E0220 00:11:53.856244 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vnzx8" podUID="00a91a87-0ad1-4805-a686-42ea9dfa6bb9" Feb 20 00:11:53 crc kubenswrapper[5119]: I0220 00:11:53.856333 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:11:53 crc kubenswrapper[5119]: I0220 00:11:53.856287 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:11:53 crc kubenswrapper[5119]: E0220 00:11:53.856513 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 20 00:11:53 crc kubenswrapper[5119]: I0220 00:11:53.856613 5119 scope.go:117] "RemoveContainer" containerID="f6194530545437982490600b50c8861471742ab842e14ab627a123778c428dcf" Feb 20 00:11:53 crc kubenswrapper[5119]: E0220 00:11:53.856664 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 20 00:11:53 crc kubenswrapper[5119]: E0220 00:11:53.857044 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 20 00:11:55 crc kubenswrapper[5119]: I0220 00:11:55.856060 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:11:55 crc kubenswrapper[5119]: I0220 00:11:55.856061 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vnzx8" Feb 20 00:11:55 crc kubenswrapper[5119]: I0220 00:11:55.856093 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:11:55 crc kubenswrapper[5119]: E0220 00:11:55.857601 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vnzx8" podUID="00a91a87-0ad1-4805-a686-42ea9dfa6bb9" Feb 20 00:11:55 crc kubenswrapper[5119]: E0220 00:11:55.857567 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 20 00:11:55 crc kubenswrapper[5119]: I0220 00:11:55.856143 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:11:55 crc kubenswrapper[5119]: E0220 00:11:55.857879 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 20 00:11:55 crc kubenswrapper[5119]: E0220 00:11:55.858760 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.603950 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.604705 5119 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.656338 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29525760-qdjzk"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.662270 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-672k9"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.662596 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29525760-qdjzk" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.666315 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.667278 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.668501 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-672k9" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.668904 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.672639 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.672844 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.673390 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-8qbnc"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.676959 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.677164 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-jnprp"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.677187 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.677720 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.678462 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.683204 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-psrg4"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.684369 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.685279 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.693112 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wqzpb"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.702021 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.702259 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.702421 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.702806 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-psrg4" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.707721 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.707789 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.708148 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.708373 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.708580 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.708721 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.708997 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.709085 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.709028 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.709246 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.709318 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.709414 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.710891 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.711126 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.711872 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-45lmz"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.712194 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.712328 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.713750 5119 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.714657 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-image-import-ca\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.714699 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be18e81d-0071-4e9b-a38a-a2ad3052c0bc-serving-cert\") pod \"apiserver-8596bd845d-jnprp\" (UID: \"be18e81d-0071-4e9b-a38a-a2ad3052c0bc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.714734 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fef1a991-20ef-447f-94d6-c18c5f875ae8-client-ca\") pod \"route-controller-manager-776cdc94d6-fmgjr\" (UID: \"fef1a991-20ef-447f-94d6-c18c5f875ae8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.714772 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fef1a991-20ef-447f-94d6-c18c5f875ae8-config\") pod \"route-controller-manager-776cdc94d6-fmgjr\" (UID: \"fef1a991-20ef-447f-94d6-c18c5f875ae8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.714805 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fghs\" (UniqueName: \"kubernetes.io/projected/e309235a-31b7-4789-9ba3-5839cab177a6-kube-api-access-7fghs\") pod \"machine-api-operator-755bb95488-psrg4\" (UID: \"e309235a-31b7-4789-9ba3-5839cab177a6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-psrg4" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.714837 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2daa98ea-b766-495c-a3e7-5d73232ddc18-serviceca\") pod \"image-pruner-29525760-qdjzk\" (UID: \"2daa98ea-b766-495c-a3e7-5d73232ddc18\") " pod="openshift-image-registry/image-pruner-29525760-qdjzk" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.714900 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e309235a-31b7-4789-9ba3-5839cab177a6-images\") pod \"machine-api-operator-755bb95488-psrg4\" (UID: \"e309235a-31b7-4789-9ba3-5839cab177a6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-psrg4" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.714929 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/be18e81d-0071-4e9b-a38a-a2ad3052c0bc-audit-policies\") pod \"apiserver-8596bd845d-jnprp\" (UID: \"be18e81d-0071-4e9b-a38a-a2ad3052c0bc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 
00:11:56.714971 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/be18e81d-0071-4e9b-a38a-a2ad3052c0bc-audit-dir\") pod \"apiserver-8596bd845d-jnprp\" (UID: \"be18e81d-0071-4e9b-a38a-a2ad3052c0bc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.715005 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-config\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.715045 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/be18e81d-0071-4e9b-a38a-a2ad3052c0bc-encryption-config\") pod \"apiserver-8596bd845d-jnprp\" (UID: \"be18e81d-0071-4e9b-a38a-a2ad3052c0bc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.715076 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/be18e81d-0071-4e9b-a38a-a2ad3052c0bc-etcd-serving-ca\") pod \"apiserver-8596bd845d-jnprp\" (UID: \"be18e81d-0071-4e9b-a38a-a2ad3052c0bc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.715117 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjqnl\" (UniqueName: \"kubernetes.io/projected/4887f9da-b32d-4096-9491-d1368c94dfdc-kube-api-access-jjqnl\") pod \"openshift-apiserver-operator-846cbfc458-672k9\" (UID: \"4887f9da-b32d-4096-9491-d1368c94dfdc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-672k9" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.715148 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-audit-dir\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.715179 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be18e81d-0071-4e9b-a38a-a2ad3052c0bc-trusted-ca-bundle\") pod \"apiserver-8596bd845d-jnprp\" (UID: \"be18e81d-0071-4e9b-a38a-a2ad3052c0bc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.715226 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scbgd\" (UniqueName: \"kubernetes.io/projected/be18e81d-0071-4e9b-a38a-a2ad3052c0bc-kube-api-access-scbgd\") pod \"apiserver-8596bd845d-jnprp\" (UID: \"be18e81d-0071-4e9b-a38a-a2ad3052c0bc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.715266 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-node-pullsecrets\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.715296 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-encryption-config\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.715339 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.715369 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/be18e81d-0071-4e9b-a38a-a2ad3052c0bc-etcd-client\") pod \"apiserver-8596bd845d-jnprp\" (UID: \"be18e81d-0071-4e9b-a38a-a2ad3052c0bc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.715400 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4887f9da-b32d-4096-9491-d1368c94dfdc-config\") pod \"openshift-apiserver-operator-846cbfc458-672k9\" (UID: \"4887f9da-b32d-4096-9491-d1368c94dfdc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-672k9" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.715427 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fef1a991-20ef-447f-94d6-c18c5f875ae8-serving-cert\") pod \"route-controller-manager-776cdc94d6-fmgjr\" (UID: \"fef1a991-20ef-447f-94d6-c18c5f875ae8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.715457 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.715488 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-serving-cert\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.715518 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-etcd-client\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " 
pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.715578 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fef1a991-20ef-447f-94d6-c18c5f875ae8-tmp\") pod \"route-controller-manager-776cdc94d6-fmgjr\" (UID: \"fef1a991-20ef-447f-94d6-c18c5f875ae8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.715617 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4887f9da-b32d-4096-9491-d1368c94dfdc-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-672k9\" (UID: \"4887f9da-b32d-4096-9491-d1368c94dfdc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-672k9" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.715649 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wknch\" (UniqueName: \"kubernetes.io/projected/fef1a991-20ef-447f-94d6-c18c5f875ae8-kube-api-access-wknch\") pod \"route-controller-manager-776cdc94d6-fmgjr\" (UID: \"fef1a991-20ef-447f-94d6-c18c5f875ae8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.715680 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf8dk\" (UniqueName: \"kubernetes.io/projected/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-kube-api-access-nf8dk\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.715715 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e309235a-31b7-4789-9ba3-5839cab177a6-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-psrg4\" (UID: \"e309235a-31b7-4789-9ba3-5839cab177a6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-psrg4" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.715760 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e309235a-31b7-4789-9ba3-5839cab177a6-config\") pod \"machine-api-operator-755bb95488-psrg4\" (UID: \"e309235a-31b7-4789-9ba3-5839cab177a6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-psrg4" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.715790 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-audit\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.715837 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98lwk\" (UniqueName: \"kubernetes.io/projected/2daa98ea-b766-495c-a3e7-5d73232ddc18-kube-api-access-98lwk\") pod \"image-pruner-29525760-qdjzk\" (UID: \"2daa98ea-b766-495c-a3e7-5d73232ddc18\") " pod="openshift-image-registry/image-pruner-29525760-qdjzk" Feb 20 
00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.721440 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-djfdx"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.721637 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wqzpb" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.721707 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.727616 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.727978 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.728155 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.728252 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.728419 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.729939 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-s6ttl"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.730171 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-djfdx" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.733112 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.733316 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.733511 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.733689 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.733820 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.733953 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.734404 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.734499 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.734959 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.735023 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.735342 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.741733 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.743537 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-h4nz2"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.743827 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.744417 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-s6ttl" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.744819 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.744944 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.745374 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.745592 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.746881 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525760-fwrs2"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.750174 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-gx2g8"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.750323 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.750712 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.750964 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-fwrs2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.751120 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.751323 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.751470 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.751619 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.755806 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.757677 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.758138 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.757876 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.767119 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.767281 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.767765 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.767821 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.767953 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.767993 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-xklbn"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.768227 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-gx2g8" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.768912 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.775786 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.776218 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.782717 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.783096 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.784177 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.784563 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.785148 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.786569 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-xk7zk"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.787699 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-xklbn" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.795835 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.806434 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.815673 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.816952 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-xk7zk" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.817199 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e420bb9a-54f6-426e-a975-a5fbd88ebcfe-machine-approver-tls\") pod \"machine-approver-54c688565-djfdx\" (UID: \"e420bb9a-54f6-426e-a975-a5fbd88ebcfe\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-djfdx" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.818323 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zg88l"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.832111 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.832418 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.832517 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-98lwk\" (UniqueName: \"kubernetes.io/projected/2daa98ea-b766-495c-a3e7-5d73232ddc18-kube-api-access-98lwk\") pod \"image-pruner-29525760-qdjzk\" (UID: \"2daa98ea-b766-495c-a3e7-5d73232ddc18\") " pod="openshift-image-registry/image-pruner-29525760-qdjzk" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.832576 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.832598 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/545f05c6-70cf-4ce3-ad50-ad4e264680ca-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-45lmz\" (UID: \"545f05c6-70cf-4ce3-ad50-ad4e264680ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.832622 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/545f05c6-70cf-4ce3-ad50-ad4e264680ca-config\") pod \"controller-manager-65b6cccf98-45lmz\" (UID: \"545f05c6-70cf-4ce3-ad50-ad4e264680ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.832650 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-image-import-ca\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.832699 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/be18e81d-0071-4e9b-a38a-a2ad3052c0bc-serving-cert\") pod \"apiserver-8596bd845d-jnprp\" (UID: \"be18e81d-0071-4e9b-a38a-a2ad3052c0bc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.832735 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cd98dcc8-4803-40df-a2d1-88f48b1e14b1-tmp\") pod \"marketplace-operator-547dbd544d-gx2g8\" (UID: \"cd98dcc8-4803-40df-a2d1-88f48b1e14b1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-gx2g8" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.832771 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fef1a991-20ef-447f-94d6-c18c5f875ae8-client-ca\") pod \"route-controller-manager-776cdc94d6-fmgjr\" (UID: \"fef1a991-20ef-447f-94d6-c18c5f875ae8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.832798 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fef1a991-20ef-447f-94d6-c18c5f875ae8-config\") pod \"route-controller-manager-776cdc94d6-fmgjr\" (UID: \"fef1a991-20ef-447f-94d6-c18c5f875ae8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.832825 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj8vd\" (UniqueName: \"kubernetes.io/projected/017eb489-ceff-4140-a56b-f631b7fab529-kube-api-access-vj8vd\") pod \"cluster-samples-operator-6b564684c8-wqzpb\" (UID: \"017eb489-ceff-4140-a56b-f631b7fab529\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wqzpb" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.832854 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ca936b9-5497-4d4a-9927-6a7742282c62-config\") pod \"service-ca-operator-5b9c976747-xklbn\" (UID: \"0ca936b9-5497-4d4a-9927-6a7742282c62\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-xklbn" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.832894 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7fghs\" (UniqueName: \"kubernetes.io/projected/e309235a-31b7-4789-9ba3-5839cab177a6-kube-api-access-7fghs\") pod \"machine-api-operator-755bb95488-psrg4\" (UID: \"e309235a-31b7-4789-9ba3-5839cab177a6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-psrg4" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.832926 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2daa98ea-b766-495c-a3e7-5d73232ddc18-serviceca\") pod \"image-pruner-29525760-qdjzk\" (UID: \"2daa98ea-b766-495c-a3e7-5d73232ddc18\") " pod="openshift-image-registry/image-pruner-29525760-qdjzk" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.832955 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/545f05c6-70cf-4ce3-ad50-ad4e264680ca-client-ca\") pod \"controller-manager-65b6cccf98-45lmz\" (UID: 
\"545f05c6-70cf-4ce3-ad50-ad4e264680ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.832980 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e420bb9a-54f6-426e-a975-a5fbd88ebcfe-auth-proxy-config\") pod \"machine-approver-54c688565-djfdx\" (UID: \"e420bb9a-54f6-426e-a975-a5fbd88ebcfe\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-djfdx" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833032 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e309235a-31b7-4789-9ba3-5839cab177a6-images\") pod \"machine-api-operator-755bb95488-psrg4\" (UID: \"e309235a-31b7-4789-9ba3-5839cab177a6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-psrg4" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833057 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/be18e81d-0071-4e9b-a38a-a2ad3052c0bc-audit-policies\") pod \"apiserver-8596bd845d-jnprp\" (UID: \"be18e81d-0071-4e9b-a38a-a2ad3052c0bc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833100 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833162 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/be18e81d-0071-4e9b-a38a-a2ad3052c0bc-audit-dir\") pod \"apiserver-8596bd845d-jnprp\" (UID: \"be18e81d-0071-4e9b-a38a-a2ad3052c0bc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833179 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833204 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e420bb9a-54f6-426e-a975-a5fbd88ebcfe-config\") pod \"machine-approver-54c688565-djfdx\" (UID: \"e420bb9a-54f6-426e-a975-a5fbd88ebcfe\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-djfdx" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833236 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-config\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833269 5119 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/be18e81d-0071-4e9b-a38a-a2ad3052c0bc-encryption-config\") pod \"apiserver-8596bd845d-jnprp\" (UID: \"be18e81d-0071-4e9b-a38a-a2ad3052c0bc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833285 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5892l\" (UniqueName: \"kubernetes.io/projected/0ca936b9-5497-4d4a-9927-6a7742282c62-kube-api-access-5892l\") pod \"service-ca-operator-5b9c976747-xklbn\" (UID: \"0ca936b9-5497-4d4a-9927-6a7742282c62\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-xklbn" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833305 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/be18e81d-0071-4e9b-a38a-a2ad3052c0bc-etcd-serving-ca\") pod \"apiserver-8596bd845d-jnprp\" (UID: \"be18e81d-0071-4e9b-a38a-a2ad3052c0bc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833321 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/425ed829-90c3-45f1-b162-107726016bfd-audit-policies\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833361 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jjqnl\" (UniqueName: \"kubernetes.io/projected/4887f9da-b32d-4096-9491-d1368c94dfdc-kube-api-access-jjqnl\") pod \"openshift-apiserver-operator-846cbfc458-672k9\" (UID: \"4887f9da-b32d-4096-9491-d1368c94dfdc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-672k9" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833386 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-audit-dir\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833407 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be18e81d-0071-4e9b-a38a-a2ad3052c0bc-trusted-ca-bundle\") pod \"apiserver-8596bd845d-jnprp\" (UID: \"be18e81d-0071-4e9b-a38a-a2ad3052c0bc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833424 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cd98dcc8-4803-40df-a2d1-88f48b1e14b1-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-gx2g8\" (UID: \"cd98dcc8-4803-40df-a2d1-88f48b1e14b1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-gx2g8" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833450 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/017eb489-ceff-4140-a56b-f631b7fab529-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-wqzpb\" (UID: \"017eb489-ceff-4140-a56b-f631b7fab529\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wqzpb" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833466 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/863693a2-ad89-4568-abd2-d96f7f9db45e-config-volume\") pod \"collect-profiles-29525760-fwrs2\" (UID: \"863693a2-ad89-4568-abd2-d96f7f9db45e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-fwrs2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833485 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833504 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/545f05c6-70cf-4ce3-ad50-ad4e264680ca-serving-cert\") pod \"controller-manager-65b6cccf98-45lmz\" (UID: \"545f05c6-70cf-4ce3-ad50-ad4e264680ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833525 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-scbgd\" (UniqueName: \"kubernetes.io/projected/be18e81d-0071-4e9b-a38a-a2ad3052c0bc-kube-api-access-scbgd\") pod \"apiserver-8596bd845d-jnprp\" (UID: \"be18e81d-0071-4e9b-a38a-a2ad3052c0bc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833564 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833595 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/da030a07-1d86-4b99-81e4-94c7ee3b5d58-available-featuregates\") pod \"openshift-config-operator-5777786469-s6ttl\" (UID: \"da030a07-1d86-4b99-81e4-94c7ee3b5d58\") " pod="openshift-config-operator/openshift-config-operator-5777786469-s6ttl" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833617 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/545f05c6-70cf-4ce3-ad50-ad4e264680ca-tmp\") pod \"controller-manager-65b6cccf98-45lmz\" (UID: \"545f05c6-70cf-4ce3-ad50-ad4e264680ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833676 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-node-pullsecrets\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833702 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-encryption-config\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833743 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd98dcc8-4803-40df-a2d1-88f48b1e14b1-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-gx2g8\" (UID: \"cd98dcc8-4803-40df-a2d1-88f48b1e14b1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-gx2g8" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833778 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da030a07-1d86-4b99-81e4-94c7ee3b5d58-serving-cert\") pod \"openshift-config-operator-5777786469-s6ttl\" (UID: \"da030a07-1d86-4b99-81e4-94c7ee3b5d58\") " pod="openshift-config-operator/openshift-config-operator-5777786469-s6ttl" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833805 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/425ed829-90c3-45f1-b162-107726016bfd-audit-dir\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833828 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833843 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-image-import-ca\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.833855 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.834634 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " 
pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.834722 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/be18e81d-0071-4e9b-a38a-a2ad3052c0bc-etcd-client\") pod \"apiserver-8596bd845d-jnprp\" (UID: \"be18e81d-0071-4e9b-a38a-a2ad3052c0bc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.834750 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zptmf\" (UniqueName: \"kubernetes.io/projected/863693a2-ad89-4568-abd2-d96f7f9db45e-kube-api-access-zptmf\") pod \"collect-profiles-29525760-fwrs2\" (UID: \"863693a2-ad89-4568-abd2-d96f7f9db45e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-fwrs2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.834771 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmsp6\" (UniqueName: \"kubernetes.io/projected/e420bb9a-54f6-426e-a975-a5fbd88ebcfe-kube-api-access-qmsp6\") pod \"machine-approver-54c688565-djfdx\" (UID: \"e420bb9a-54f6-426e-a975-a5fbd88ebcfe\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-djfdx" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.834797 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4887f9da-b32d-4096-9491-d1368c94dfdc-config\") pod \"openshift-apiserver-operator-846cbfc458-672k9\" (UID: \"4887f9da-b32d-4096-9491-d1368c94dfdc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-672k9" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.834817 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fef1a991-20ef-447f-94d6-c18c5f875ae8-serving-cert\") pod \"route-controller-manager-776cdc94d6-fmgjr\" (UID: \"fef1a991-20ef-447f-94d6-c18c5f875ae8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.834838 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.834868 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.834894 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-serving-cert\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.834910 5119 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pcls\" (UniqueName: \"kubernetes.io/projected/425ed829-90c3-45f1-b162-107726016bfd-kube-api-access-4pcls\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.834925 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9zhx\" (UniqueName: \"kubernetes.io/projected/545f05c6-70cf-4ce3-ad50-ad4e264680ca-kube-api-access-f9zhx\") pod \"controller-manager-65b6cccf98-45lmz\" (UID: \"545f05c6-70cf-4ce3-ad50-ad4e264680ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.834944 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f2mf\" (UniqueName: \"kubernetes.io/projected/da030a07-1d86-4b99-81e4-94c7ee3b5d58-kube-api-access-6f2mf\") pod \"openshift-config-operator-5777786469-s6ttl\" (UID: \"da030a07-1d86-4b99-81e4-94c7ee3b5d58\") " pod="openshift-config-operator/openshift-config-operator-5777786469-s6ttl" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.834961 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.834984 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-etcd-client\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.835002 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.835021 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fef1a991-20ef-447f-94d6-c18c5f875ae8-tmp\") pod \"route-controller-manager-776cdc94d6-fmgjr\" (UID: \"fef1a991-20ef-447f-94d6-c18c5f875ae8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.835039 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc2ns\" (UniqueName: \"kubernetes.io/projected/cd98dcc8-4803-40df-a2d1-88f48b1e14b1-kube-api-access-rc2ns\") pod \"marketplace-operator-547dbd544d-gx2g8\" (UID: \"cd98dcc8-4803-40df-a2d1-88f48b1e14b1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-gx2g8" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.835059 5119 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.835086 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4887f9da-b32d-4096-9491-d1368c94dfdc-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-672k9\" (UID: \"4887f9da-b32d-4096-9491-d1368c94dfdc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-672k9" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.835103 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wknch\" (UniqueName: \"kubernetes.io/projected/fef1a991-20ef-447f-94d6-c18c5f875ae8-kube-api-access-wknch\") pod \"route-controller-manager-776cdc94d6-fmgjr\" (UID: \"fef1a991-20ef-447f-94d6-c18c5f875ae8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.835120 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nf8dk\" (UniqueName: \"kubernetes.io/projected/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-kube-api-access-nf8dk\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.835264 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e309235a-31b7-4789-9ba3-5839cab177a6-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-psrg4\" (UID: \"e309235a-31b7-4789-9ba3-5839cab177a6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-psrg4" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.835285 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ca936b9-5497-4d4a-9927-6a7742282c62-serving-cert\") pod \"service-ca-operator-5b9c976747-xklbn\" (UID: \"0ca936b9-5497-4d4a-9927-6a7742282c62\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-xklbn" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.835302 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.835335 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e309235a-31b7-4789-9ba3-5839cab177a6-config\") pod \"machine-api-operator-755bb95488-psrg4\" (UID: \"e309235a-31b7-4789-9ba3-5839cab177a6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-psrg4" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.835352 5119 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-audit\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.835369 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/863693a2-ad89-4568-abd2-d96f7f9db45e-secret-volume\") pod \"collect-profiles-29525760-fwrs2\" (UID: \"863693a2-ad89-4568-abd2-d96f7f9db45e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-fwrs2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.836135 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/be18e81d-0071-4e9b-a38a-a2ad3052c0bc-etcd-serving-ca\") pod \"apiserver-8596bd845d-jnprp\" (UID: \"be18e81d-0071-4e9b-a38a-a2ad3052c0bc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.836200 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-node-pullsecrets\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.836205 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.836389 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.836511 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.836642 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.836842 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.837091 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.837173 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fef1a991-20ef-447f-94d6-c18c5f875ae8-client-ca\") pod \"route-controller-manager-776cdc94d6-fmgjr\" (UID: \"fef1a991-20ef-447f-94d6-c18c5f875ae8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.837255 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.837340 5119 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.837527 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.837701 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.838321 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6tsqj"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.838981 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.839478 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fef1a991-20ef-447f-94d6-c18c5f875ae8-tmp\") pod \"route-controller-manager-776cdc94d6-fmgjr\" (UID: \"fef1a991-20ef-447f-94d6-c18c5f875ae8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.840347 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fef1a991-20ef-447f-94d6-c18c5f875ae8-config\") pod \"route-controller-manager-776cdc94d6-fmgjr\" (UID: \"fef1a991-20ef-447f-94d6-c18c5f875ae8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.841099 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2daa98ea-b766-495c-a3e7-5d73232ddc18-serviceca\") pod \"image-pruner-29525760-qdjzk\" (UID: \"2daa98ea-b766-495c-a3e7-5d73232ddc18\") " pod="openshift-image-registry/image-pruner-29525760-qdjzk" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.842418 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-audit-dir\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.842818 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e309235a-31b7-4789-9ba3-5839cab177a6-images\") pod \"machine-api-operator-755bb95488-psrg4\" (UID: \"e309235a-31b7-4789-9ba3-5839cab177a6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-psrg4" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.842819 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/be18e81d-0071-4e9b-a38a-a2ad3052c0bc-audit-policies\") pod \"apiserver-8596bd845d-jnprp\" (UID: \"be18e81d-0071-4e9b-a38a-a2ad3052c0bc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.842893 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/be18e81d-0071-4e9b-a38a-a2ad3052c0bc-audit-dir\") pod \"apiserver-8596bd845d-jnprp\" (UID: 
\"be18e81d-0071-4e9b-a38a-a2ad3052c0bc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.842983 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be18e81d-0071-4e9b-a38a-a2ad3052c0bc-trusted-ca-bundle\") pod \"apiserver-8596bd845d-jnprp\" (UID: \"be18e81d-0071-4e9b-a38a-a2ad3052c0bc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.843368 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-etcd-client\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.843480 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-config\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.843975 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4887f9da-b32d-4096-9491-d1368c94dfdc-config\") pod \"openshift-apiserver-operator-846cbfc458-672k9\" (UID: \"4887f9da-b32d-4096-9491-d1368c94dfdc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-672k9" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.844483 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.844516 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.844718 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4887f9da-b32d-4096-9491-d1368c94dfdc-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-672k9\" (UID: \"4887f9da-b32d-4096-9491-d1368c94dfdc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-672k9" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.844765 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.845471 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e309235a-31b7-4789-9ba3-5839cab177a6-config\") pod \"machine-api-operator-755bb95488-psrg4\" (UID: \"e309235a-31b7-4789-9ba3-5839cab177a6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-psrg4" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.845928 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be18e81d-0071-4e9b-a38a-a2ad3052c0bc-serving-cert\") pod \"apiserver-8596bd845d-jnprp\" (UID: 
\"be18e81d-0071-4e9b-a38a-a2ad3052c0bc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.848668 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.848809 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-audit\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.849050 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-g2szw"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.849200 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zg88l" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.850006 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/be18e81d-0071-4e9b-a38a-a2ad3052c0bc-etcd-client\") pod \"apiserver-8596bd845d-jnprp\" (UID: \"be18e81d-0071-4e9b-a38a-a2ad3052c0bc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.850284 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-encryption-config\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.850432 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.850444 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6tsqj" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.852214 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.853235 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.855822 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/be18e81d-0071-4e9b-a38a-a2ad3052c0bc-encryption-config\") pod \"apiserver-8596bd845d-jnprp\" (UID: \"be18e81d-0071-4e9b-a38a-a2ad3052c0bc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.856771 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-serving-cert\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.857671 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.860112 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.860316 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.865518 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zshjz"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.865604 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.865653 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e309235a-31b7-4789-9ba3-5839cab177a6-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-psrg4\" (UID: \"e309235a-31b7-4789-9ba3-5839cab177a6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-psrg4" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.865933 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-g2szw" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.869028 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5vrqr"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.869890 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.879843 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k6wjq"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.881756 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zshjz" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.884584 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-fvnk8"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.886388 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k6wjq" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.886666 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fef1a991-20ef-447f-94d6-c18c5f875ae8-serving-cert\") pod \"route-controller-manager-776cdc94d6-fmgjr\" (UID: \"fef1a991-20ef-447f-94d6-c18c5f875ae8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.889061 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-gmfcd"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.889329 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5vrqr" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.892109 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-scbgd\" (UniqueName: \"kubernetes.io/projected/be18e81d-0071-4e9b-a38a-a2ad3052c0bc-kube-api-access-scbgd\") pod \"apiserver-8596bd845d-jnprp\" (UID: \"be18e81d-0071-4e9b-a38a-a2ad3052c0bc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.892171 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fghs\" (UniqueName: \"kubernetes.io/projected/e309235a-31b7-4789-9ba3-5839cab177a6-kube-api-access-7fghs\") pod \"machine-api-operator-755bb95488-psrg4\" (UID: \"e309235a-31b7-4789-9ba3-5839cab177a6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-psrg4" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.892264 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wknch\" (UniqueName: \"kubernetes.io/projected/fef1a991-20ef-447f-94d6-c18c5f875ae8-kube-api-access-wknch\") pod \"route-controller-manager-776cdc94d6-fmgjr\" (UID: \"fef1a991-20ef-447f-94d6-c18c5f875ae8\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.892628 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf8dk\" (UniqueName: \"kubernetes.io/projected/bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead-kube-api-access-nf8dk\") pod \"apiserver-9ddfb9f55-8qbnc\" (UID: \"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead\") " pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.894187 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-98lwk\" (UniqueName: \"kubernetes.io/projected/2daa98ea-b766-495c-a3e7-5d73232ddc18-kube-api-access-98lwk\") pod \"image-pruner-29525760-qdjzk\" (UID: \"2daa98ea-b766-495c-a3e7-5d73232ddc18\") " pod="openshift-image-registry/image-pruner-29525760-qdjzk" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.894243 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.894487 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjqnl\" (UniqueName: \"kubernetes.io/projected/4887f9da-b32d-4096-9491-d1368c94dfdc-kube-api-access-jjqnl\") pod \"openshift-apiserver-operator-846cbfc458-672k9\" (UID: \"4887f9da-b32d-4096-9491-d1368c94dfdc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-672k9" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.894919 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-fvnk8" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.897521 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.908890 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.911622 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fd2rm"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.918681 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-j8mqx"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.919340 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fd2rm" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.921497 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8f5xb"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.922357 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-j8mqx" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.924353 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9vxhd"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.924887 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.925142 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8f5xb" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.927282 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-mt9s7"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.927418 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9vxhd" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.930551 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-z9n55"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.930891 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mt9s7" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.935252 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wwrm5"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.935420 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z9n55" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.935978 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ca936b9-5497-4d4a-9927-6a7742282c62-serving-cert\") pod \"service-ca-operator-5b9c976747-xklbn\" (UID: \"0ca936b9-5497-4d4a-9927-6a7742282c62\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-xklbn" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936011 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936152 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/863693a2-ad89-4568-abd2-d96f7f9db45e-secret-volume\") pod \"collect-profiles-29525760-fwrs2\" (UID: \"863693a2-ad89-4568-abd2-d96f7f9db45e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-fwrs2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936188 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62kl9\" (UniqueName: \"kubernetes.io/projected/8eb821d0-6b6b-422d-bbc5-8394778f57cd-kube-api-access-62kl9\") pod \"multus-admission-controller-69db94689b-g2szw\" (UID: \"8eb821d0-6b6b-422d-bbc5-8394778f57cd\") " pod="openshift-multus/multus-admission-controller-69db94689b-g2szw" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936210 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/30f80be9-b4ee-419d-82c1-0341637259d2-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-8f5xb\" (UID: \"30f80be9-b4ee-419d-82c1-0341637259d2\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8f5xb" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936233 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/30f80be9-b4ee-419d-82c1-0341637259d2-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-8f5xb\" (UID: \"30f80be9-b4ee-419d-82c1-0341637259d2\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8f5xb" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936260 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e420bb9a-54f6-426e-a975-a5fbd88ebcfe-machine-approver-tls\") pod \"machine-approver-54c688565-djfdx\" (UID: \"e420bb9a-54f6-426e-a975-a5fbd88ebcfe\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-djfdx" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936282 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/97ee1eef-ee44-442d-ab55-8ae1d6bd364a-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-zshjz\" (UID: 
\"97ee1eef-ee44-442d-ab55-8ae1d6bd364a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zshjz" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936326 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8e68c79d-159b-48a1-ba29-0ecebeeaf581-images\") pod \"machine-config-operator-67c9d58cbb-zg88l\" (UID: \"8e68c79d-159b-48a1-ba29-0ecebeeaf581\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zg88l" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936435 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f0b5a231-2ef1-425a-ae19-7ff305bed3e0-tmpfs\") pod \"packageserver-7d4fc7d867-j8mqx\" (UID: \"f0b5a231-2ef1-425a-ae19-7ff305bed3e0\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-j8mqx" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936489 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936516 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/545f05c6-70cf-4ce3-ad50-ad4e264680ca-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-45lmz\" (UID: \"545f05c6-70cf-4ce3-ad50-ad4e264680ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936534 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrdts\" (UniqueName: \"kubernetes.io/projected/05c29cb3-f6e5-40f4-8780-0a98b0c52330-kube-api-access-zrdts\") pod \"migrator-866fcbc849-fvnk8\" (UID: \"05c29cb3-f6e5-40f4-8780-0a98b0c52330\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-fvnk8" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936563 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/780bd316-cfcd-4cf9-b0b9-291cccd331fb-serving-cert\") pod \"authentication-operator-7f5c659b84-mt9s7\" (UID: \"780bd316-cfcd-4cf9-b0b9-291cccd331fb\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mt9s7" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936587 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/545f05c6-70cf-4ce3-ad50-ad4e264680ca-config\") pod \"controller-manager-65b6cccf98-45lmz\" (UID: \"545f05c6-70cf-4ce3-ad50-ad4e264680ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936605 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpjcw\" (UniqueName: \"kubernetes.io/projected/2c8310dc-cc1c-4c06-a86e-4147e8b11a7c-kube-api-access-zpjcw\") pod \"service-ca-74545575db-xk7zk\" (UID: 
\"2c8310dc-cc1c-4c06-a86e-4147e8b11a7c\") " pod="openshift-service-ca/service-ca-74545575db-xk7zk" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936625 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/518958b8-c3df-4db5-87c9-6528f05e4bfe-srv-cert\") pod \"catalog-operator-75ff9f647d-6tsqj\" (UID: \"518958b8-c3df-4db5-87c9-6528f05e4bfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6tsqj" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936643 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prxgx\" (UniqueName: \"kubernetes.io/projected/54acf8be-ab9f-4e85-8394-dfafbf121b67-kube-api-access-prxgx\") pod \"control-plane-machine-set-operator-75ffdb6fcd-k6wjq\" (UID: \"54acf8be-ab9f-4e85-8394-dfafbf121b67\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k6wjq" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936659 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3a805538-2592-40cf-9131-444b1c0f3cbb-stats-auth\") pod \"router-default-68cf44c8b8-gmfcd\" (UID: \"3a805538-2592-40cf-9131-444b1c0f3cbb\") " pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936681 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cd98dcc8-4803-40df-a2d1-88f48b1e14b1-tmp\") pod \"marketplace-operator-547dbd544d-gx2g8\" (UID: \"cd98dcc8-4803-40df-a2d1-88f48b1e14b1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-gx2g8" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936697 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfkmz\" (UniqueName: \"kubernetes.io/projected/30f80be9-b4ee-419d-82c1-0341637259d2-kube-api-access-tfkmz\") pod \"machine-config-controller-f9cdd68f7-8f5xb\" (UID: \"30f80be9-b4ee-419d-82c1-0341637259d2\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8f5xb" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936718 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2929140b-fc72-423b-a599-6893f45f258b-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-fd2rm\" (UID: \"2929140b-fc72-423b-a599-6893f45f258b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fd2rm" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936732 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/edaa9a3d-ab5b-4848-a9a1-9af97200e40e-tmpfs\") pod \"olm-operator-5cdf44d969-9vxhd\" (UID: \"edaa9a3d-ab5b-4848-a9a1-9af97200e40e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9vxhd" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936751 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e68c79d-159b-48a1-ba29-0ecebeeaf581-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-zg88l\" (UID: 
\"8e68c79d-159b-48a1-ba29-0ecebeeaf581\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zg88l" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936768 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwz5h\" (UniqueName: \"kubernetes.io/projected/1a8aed0e-0d53-4db2-ac89-b0c953e8fc36-kube-api-access-bwz5h\") pod \"package-server-manager-77f986bd66-5vrqr\" (UID: \"1a8aed0e-0d53-4db2-ac89-b0c953e8fc36\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5vrqr" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936788 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vj8vd\" (UniqueName: \"kubernetes.io/projected/017eb489-ceff-4140-a56b-f631b7fab529-kube-api-access-vj8vd\") pod \"cluster-samples-operator-6b564684c8-wqzpb\" (UID: \"017eb489-ceff-4140-a56b-f631b7fab529\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wqzpb" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936808 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ca936b9-5497-4d4a-9927-6a7742282c62-config\") pod \"service-ca-operator-5b9c976747-xklbn\" (UID: \"0ca936b9-5497-4d4a-9927-6a7742282c62\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-xklbn" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936823 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2c8310dc-cc1c-4c06-a86e-4147e8b11a7c-signing-cabundle\") pod \"service-ca-74545575db-xk7zk\" (UID: \"2c8310dc-cc1c-4c06-a86e-4147e8b11a7c\") " pod="openshift-service-ca/service-ca-74545575db-xk7zk" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936839 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3a805538-2592-40cf-9131-444b1c0f3cbb-default-certificate\") pod \"router-default-68cf44c8b8-gmfcd\" (UID: \"3a805538-2592-40cf-9131-444b1c0f3cbb\") " pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936861 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/545f05c6-70cf-4ce3-ad50-ad4e264680ca-client-ca\") pod \"controller-manager-65b6cccf98-45lmz\" (UID: \"545f05c6-70cf-4ce3-ad50-ad4e264680ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936876 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e420bb9a-54f6-426e-a975-a5fbd88ebcfe-auth-proxy-config\") pod \"machine-approver-54c688565-djfdx\" (UID: \"e420bb9a-54f6-426e-a975-a5fbd88ebcfe\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-djfdx" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936896 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/780bd316-cfcd-4cf9-b0b9-291cccd331fb-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-mt9s7\" (UID: \"780bd316-cfcd-4cf9-b0b9-291cccd331fb\") " 
pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mt9s7" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936914 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1a8aed0e-0d53-4db2-ac89-b0c953e8fc36-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-5vrqr\" (UID: \"1a8aed0e-0d53-4db2-ac89-b0c953e8fc36\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5vrqr" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936939 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.936957 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz9r7\" (UniqueName: \"kubernetes.io/projected/518958b8-c3df-4db5-87c9-6528f05e4bfe-kube-api-access-hz9r7\") pod \"catalog-operator-75ff9f647d-6tsqj\" (UID: \"518958b8-c3df-4db5-87c9-6528f05e4bfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6tsqj" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937029 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937055 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e420bb9a-54f6-426e-a975-a5fbd88ebcfe-config\") pod \"machine-approver-54c688565-djfdx\" (UID: \"e420bb9a-54f6-426e-a975-a5fbd88ebcfe\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-djfdx" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937096 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97ee1eef-ee44-442d-ab55-8ae1d6bd364a-config\") pod \"openshift-kube-scheduler-operator-54f497555d-zshjz\" (UID: \"97ee1eef-ee44-442d-ab55-8ae1d6bd364a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zshjz" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937115 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/edaa9a3d-ab5b-4848-a9a1-9af97200e40e-srv-cert\") pod \"olm-operator-5cdf44d969-9vxhd\" (UID: \"edaa9a3d-ab5b-4848-a9a1-9af97200e40e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9vxhd" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937143 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5892l\" (UniqueName: \"kubernetes.io/projected/0ca936b9-5497-4d4a-9927-6a7742282c62-kube-api-access-5892l\") pod \"service-ca-operator-5b9c976747-xklbn\" (UID: 
\"0ca936b9-5497-4d4a-9927-6a7742282c62\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-xklbn" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937161 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e68c79d-159b-48a1-ba29-0ecebeeaf581-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-zg88l\" (UID: \"8e68c79d-159b-48a1-ba29-0ecebeeaf581\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zg88l" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937177 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a805538-2592-40cf-9131-444b1c0f3cbb-metrics-certs\") pod \"router-default-68cf44c8b8-gmfcd\" (UID: \"3a805538-2592-40cf-9131-444b1c0f3cbb\") " pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937194 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f0b5a231-2ef1-425a-ae19-7ff305bed3e0-webhook-cert\") pod \"packageserver-7d4fc7d867-j8mqx\" (UID: \"f0b5a231-2ef1-425a-ae19-7ff305bed3e0\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-j8mqx" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937216 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/425ed829-90c3-45f1-b162-107726016bfd-audit-policies\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937235 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f0b5a231-2ef1-425a-ae19-7ff305bed3e0-apiservice-cert\") pod \"packageserver-7d4fc7d867-j8mqx\" (UID: \"f0b5a231-2ef1-425a-ae19-7ff305bed3e0\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-j8mqx" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937258 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97ee1eef-ee44-442d-ab55-8ae1d6bd364a-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-zshjz\" (UID: \"97ee1eef-ee44-442d-ab55-8ae1d6bd364a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zshjz" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937276 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/780bd316-cfcd-4cf9-b0b9-291cccd331fb-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-mt9s7\" (UID: \"780bd316-cfcd-4cf9-b0b9-291cccd331fb\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mt9s7" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937318 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8eb821d0-6b6b-422d-bbc5-8394778f57cd-webhook-certs\") pod \"multus-admission-controller-69db94689b-g2szw\" (UID: 
\"8eb821d0-6b6b-422d-bbc5-8394778f57cd\") " pod="openshift-multus/multus-admission-controller-69db94689b-g2szw" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937337 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sqnw\" (UniqueName: \"kubernetes.io/projected/edaa9a3d-ab5b-4848-a9a1-9af97200e40e-kube-api-access-8sqnw\") pod \"olm-operator-5cdf44d969-9vxhd\" (UID: \"edaa9a3d-ab5b-4848-a9a1-9af97200e40e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9vxhd" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937358 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cd98dcc8-4803-40df-a2d1-88f48b1e14b1-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-gx2g8\" (UID: \"cd98dcc8-4803-40df-a2d1-88f48b1e14b1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-gx2g8" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937374 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/518958b8-c3df-4db5-87c9-6528f05e4bfe-tmpfs\") pod \"catalog-operator-75ff9f647d-6tsqj\" (UID: \"518958b8-c3df-4db5-87c9-6528f05e4bfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6tsqj" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937399 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/017eb489-ceff-4140-a56b-f631b7fab529-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-wqzpb\" (UID: \"017eb489-ceff-4140-a56b-f631b7fab529\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wqzpb" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937415 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/863693a2-ad89-4568-abd2-d96f7f9db45e-config-volume\") pod \"collect-profiles-29525760-fwrs2\" (UID: \"863693a2-ad89-4568-abd2-d96f7f9db45e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-fwrs2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937432 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937448 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/545f05c6-70cf-4ce3-ad50-ad4e264680ca-serving-cert\") pod \"controller-manager-65b6cccf98-45lmz\" (UID: \"545f05c6-70cf-4ce3-ad50-ad4e264680ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937464 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/edaa9a3d-ab5b-4848-a9a1-9af97200e40e-profile-collector-cert\") pod \"olm-operator-5cdf44d969-9vxhd\" (UID: \"edaa9a3d-ab5b-4848-a9a1-9af97200e40e\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9vxhd" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937486 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937502 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/780bd316-cfcd-4cf9-b0b9-291cccd331fb-config\") pod \"authentication-operator-7f5c659b84-mt9s7\" (UID: \"780bd316-cfcd-4cf9-b0b9-291cccd331fb\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mt9s7" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937519 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqqlg\" (UniqueName: \"kubernetes.io/projected/8e68c79d-159b-48a1-ba29-0ecebeeaf581-kube-api-access-gqqlg\") pod \"machine-config-operator-67c9d58cbb-zg88l\" (UID: \"8e68c79d-159b-48a1-ba29-0ecebeeaf581\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zg88l" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937533 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a805538-2592-40cf-9131-444b1c0f3cbb-service-ca-bundle\") pod \"router-default-68cf44c8b8-gmfcd\" (UID: \"3a805538-2592-40cf-9131-444b1c0f3cbb\") " pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937570 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/da030a07-1d86-4b99-81e4-94c7ee3b5d58-available-featuregates\") pod \"openshift-config-operator-5777786469-s6ttl\" (UID: \"da030a07-1d86-4b99-81e4-94c7ee3b5d58\") " pod="openshift-config-operator/openshift-config-operator-5777786469-s6ttl" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937588 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/545f05c6-70cf-4ce3-ad50-ad4e264680ca-tmp\") pod \"controller-manager-65b6cccf98-45lmz\" (UID: \"545f05c6-70cf-4ce3-ad50-ad4e264680ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937622 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2929140b-fc72-423b-a599-6893f45f258b-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-fd2rm\" (UID: \"2929140b-fc72-423b-a599-6893f45f258b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fd2rm" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937638 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjlnd\" (UniqueName: \"kubernetes.io/projected/3a805538-2592-40cf-9131-444b1c0f3cbb-kube-api-access-rjlnd\") pod \"router-default-68cf44c8b8-gmfcd\" (UID: 
\"3a805538-2592-40cf-9131-444b1c0f3cbb\") " pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937669 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd98dcc8-4803-40df-a2d1-88f48b1e14b1-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-gx2g8\" (UID: \"cd98dcc8-4803-40df-a2d1-88f48b1e14b1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-gx2g8" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937687 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da030a07-1d86-4b99-81e4-94c7ee3b5d58-serving-cert\") pod \"openshift-config-operator-5777786469-s6ttl\" (UID: \"da030a07-1d86-4b99-81e4-94c7ee3b5d58\") " pod="openshift-config-operator/openshift-config-operator-5777786469-s6ttl" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937703 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/425ed829-90c3-45f1-b162-107726016bfd-audit-dir\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937720 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937742 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zptmf\" (UniqueName: \"kubernetes.io/projected/863693a2-ad89-4568-abd2-d96f7f9db45e-kube-api-access-zptmf\") pod \"collect-profiles-29525760-fwrs2\" (UID: \"863693a2-ad89-4568-abd2-d96f7f9db45e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-fwrs2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937760 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qmsp6\" (UniqueName: \"kubernetes.io/projected/e420bb9a-54f6-426e-a975-a5fbd88ebcfe-kube-api-access-qmsp6\") pod \"machine-approver-54c688565-djfdx\" (UID: \"e420bb9a-54f6-426e-a975-a5fbd88ebcfe\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-djfdx" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937775 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgkb9\" (UniqueName: \"kubernetes.io/projected/780bd316-cfcd-4cf9-b0b9-291cccd331fb-kube-api-access-bgkb9\") pod \"authentication-operator-7f5c659b84-mt9s7\" (UID: \"780bd316-cfcd-4cf9-b0b9-291cccd331fb\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mt9s7" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937792 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/518958b8-c3df-4db5-87c9-6528f05e4bfe-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-6tsqj\" (UID: \"518958b8-c3df-4db5-87c9-6528f05e4bfe\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6tsqj" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937848 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937871 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4pcls\" (UniqueName: \"kubernetes.io/projected/425ed829-90c3-45f1-b162-107726016bfd-kube-api-access-4pcls\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937910 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f9zhx\" (UniqueName: \"kubernetes.io/projected/545f05c6-70cf-4ce3-ad50-ad4e264680ca-kube-api-access-f9zhx\") pod \"controller-manager-65b6cccf98-45lmz\" (UID: \"545f05c6-70cf-4ce3-ad50-ad4e264680ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937933 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2929140b-fc72-423b-a599-6893f45f258b-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-fd2rm\" (UID: \"2929140b-fc72-423b-a599-6893f45f258b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fd2rm" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937950 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6f2mf\" (UniqueName: \"kubernetes.io/projected/da030a07-1d86-4b99-81e4-94c7ee3b5d58-kube-api-access-6f2mf\") pod \"openshift-config-operator-5777786469-s6ttl\" (UID: \"da030a07-1d86-4b99-81e4-94c7ee3b5d58\") " pod="openshift-config-operator/openshift-config-operator-5777786469-s6ttl" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.937988 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.938005 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2c8310dc-cc1c-4c06-a86e-4147e8b11a7c-signing-key\") pod \"service-ca-74545575db-xk7zk\" (UID: \"2c8310dc-cc1c-4c06-a86e-4147e8b11a7c\") " pod="openshift-service-ca/service-ca-74545575db-xk7zk" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.938029 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.938047 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-nglnc"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.938067 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2929140b-fc72-423b-a599-6893f45f258b-config\") pod \"kube-controller-manager-operator-69d5f845f8-fd2rm\" (UID: \"2929140b-fc72-423b-a599-6893f45f258b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fd2rm" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.938085 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97ee1eef-ee44-442d-ab55-8ae1d6bd364a-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-zshjz\" (UID: \"97ee1eef-ee44-442d-ab55-8ae1d6bd364a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zshjz" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.938106 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rc2ns\" (UniqueName: \"kubernetes.io/projected/cd98dcc8-4803-40df-a2d1-88f48b1e14b1-kube-api-access-rc2ns\") pod \"marketplace-operator-547dbd544d-gx2g8\" (UID: \"cd98dcc8-4803-40df-a2d1-88f48b1e14b1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-gx2g8" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.938185 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.938208 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/54acf8be-ab9f-4e85-8394-dfafbf121b67-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-k6wjq\" (UID: \"54acf8be-ab9f-4e85-8394-dfafbf121b67\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k6wjq" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.938248 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svn2b\" (UniqueName: \"kubernetes.io/projected/f0b5a231-2ef1-425a-ae19-7ff305bed3e0-kube-api-access-svn2b\") pod \"packageserver-7d4fc7d867-j8mqx\" (UID: \"f0b5a231-2ef1-425a-ae19-7ff305bed3e0\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-j8mqx" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.938266 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/545f05c6-70cf-4ce3-ad50-ad4e264680ca-config\") pod \"controller-manager-65b6cccf98-45lmz\" (UID: \"545f05c6-70cf-4ce3-ad50-ad4e264680ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.938350 5119 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/545f05c6-70cf-4ce3-ad50-ad4e264680ca-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-45lmz\" (UID: \"545f05c6-70cf-4ce3-ad50-ad4e264680ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.939525 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ca936b9-5497-4d4a-9927-6a7742282c62-serving-cert\") pod \"service-ca-operator-5b9c976747-xklbn\" (UID: \"0ca936b9-5497-4d4a-9927-6a7742282c62\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-xklbn" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.939615 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/863693a2-ad89-4568-abd2-d96f7f9db45e-secret-volume\") pod \"collect-profiles-29525760-fwrs2\" (UID: \"863693a2-ad89-4568-abd2-d96f7f9db45e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-fwrs2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.939791 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.939845 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.939859 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cd98dcc8-4803-40df-a2d1-88f48b1e14b1-tmp\") pod \"marketplace-operator-547dbd544d-gx2g8\" (UID: \"cd98dcc8-4803-40df-a2d1-88f48b1e14b1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-gx2g8" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.940313 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/da030a07-1d86-4b99-81e4-94c7ee3b5d58-available-featuregates\") pod \"openshift-config-operator-5777786469-s6ttl\" (UID: \"da030a07-1d86-4b99-81e4-94c7ee3b5d58\") " pod="openshift-config-operator/openshift-config-operator-5777786469-s6ttl" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.940732 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/545f05c6-70cf-4ce3-ad50-ad4e264680ca-tmp\") pod \"controller-manager-65b6cccf98-45lmz\" (UID: \"545f05c6-70cf-4ce3-ad50-ad4e264680ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.940981 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ca936b9-5497-4d4a-9927-6a7742282c62-config\") pod \"service-ca-operator-5b9c976747-xklbn\" (UID: 
\"0ca936b9-5497-4d4a-9927-6a7742282c62\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-xklbn" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.941341 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/425ed829-90c3-45f1-b162-107726016bfd-audit-policies\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.941532 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/863693a2-ad89-4568-abd2-d96f7f9db45e-config-volume\") pod \"collect-profiles-29525760-fwrs2\" (UID: \"863693a2-ad89-4568-abd2-d96f7f9db45e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-fwrs2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.942186 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e420bb9a-54f6-426e-a975-a5fbd88ebcfe-config\") pod \"machine-approver-54c688565-djfdx\" (UID: \"e420bb9a-54f6-426e-a975-a5fbd88ebcfe\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-djfdx" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.942142 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.942446 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd98dcc8-4803-40df-a2d1-88f48b1e14b1-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-gx2g8\" (UID: \"cd98dcc8-4803-40df-a2d1-88f48b1e14b1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-gx2g8" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.942712 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e420bb9a-54f6-426e-a975-a5fbd88ebcfe-auth-proxy-config\") pod \"machine-approver-54c688565-djfdx\" (UID: \"e420bb9a-54f6-426e-a975-a5fbd88ebcfe\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-djfdx" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.942766 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/425ed829-90c3-45f1-b162-107726016bfd-audit-dir\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.942928 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e420bb9a-54f6-426e-a975-a5fbd88ebcfe-machine-approver-tls\") pod \"machine-approver-54c688565-djfdx\" (UID: \"e420bb9a-54f6-426e-a975-a5fbd88ebcfe\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-djfdx" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.943053 5119 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.943589 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.943948 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/545f05c6-70cf-4ce3-ad50-ad4e264680ca-client-ca\") pod \"controller-manager-65b6cccf98-45lmz\" (UID: \"545f05c6-70cf-4ce3-ad50-ad4e264680ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.944146 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.944258 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cd98dcc8-4803-40df-a2d1-88f48b1e14b1-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-gx2g8\" (UID: \"cd98dcc8-4803-40df-a2d1-88f48b1e14b1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-gx2g8" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.944935 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/545f05c6-70cf-4ce3-ad50-ad4e264680ca-serving-cert\") pod \"controller-manager-65b6cccf98-45lmz\" (UID: \"545f05c6-70cf-4ce3-ad50-ad4e264680ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.945447 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.945945 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-vz6mj"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.946403 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wwrm5" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.947052 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.947474 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-nglnc" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.947744 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da030a07-1d86-4b99-81e4-94c7ee3b5d58-serving-cert\") pod \"openshift-config-operator-5777786469-s6ttl\" (UID: \"da030a07-1d86-4b99-81e4-94c7ee3b5d58\") " pod="openshift-config-operator/openshift-config-operator-5777786469-s6ttl" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.948262 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.948941 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/017eb489-ceff-4140-a56b-f631b7fab529-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-wqzpb\" (UID: \"017eb489-ceff-4140-a56b-f631b7fab529\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wqzpb" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.951165 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.951210 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.953592 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.958588 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29525760-qdjzk"] Feb 20 00:11:56 crc 
kubenswrapper[5119]: I0220 00:11:56.958633 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-599mk"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.958853 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-vz6mj" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.966897 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.969449 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wqzpb"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.969482 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-qg8t8"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.970368 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.976410 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-8mw6h"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.976617 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-qg8t8" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.979644 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-4q264"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.979771 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-8mw6h" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.982793 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-qh5n2"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.982925 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-4q264" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.988319 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.988375 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-xk7zk"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.988391 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-xklbn"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.988432 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525760-fwrs2"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.988450 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-jnprp"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.988461 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-s6ttl"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.988472 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-672k9"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.988483 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-whcrz"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.988392 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.991334 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-8qbnc"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.991360 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6kwxp"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.991549 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-whcrz" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.994200 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-hhj8g"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.994789 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6kwxp" Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.998059 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zshjz"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.998084 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4mff9"] Feb 20 00:11:56 crc kubenswrapper[5119]: I0220 00:11:56.998299 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-hhj8g" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.001820 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wwrm5"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.001842 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-psrg4"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.001854 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-j8mqx"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.001882 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5vrqr"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.001891 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-mt9s7"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.001900 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-vz6mj"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.001909 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fd2rm"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.001918 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k6wjq"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.001927 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6tsqj"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.001957 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-8dmqk"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.002010 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-4mff9" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.005302 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-7c2z6"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.005474 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-8dmqk" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.005533 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.008360 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8f5xb"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.008382 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-g2szw"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.008393 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4mff9"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.008406 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-whcrz"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.008471 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-45lmz"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.008485 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-z9n55"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.008515 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-h4nz2"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.008526 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zg88l"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.008551 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-fvnk8"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.008553 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-7c2z6" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.008561 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-hhj8g"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.008667 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-qg8t8"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.008678 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6kwxp"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.008688 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-gx2g8"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.008697 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-4q264"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.008707 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-599mk"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.008717 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9vxhd"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.008727 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-8dmqk"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.008739 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-8mw6h"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.008749 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-qh5n2"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.008760 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-sn42f"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.013850 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29525760-qdjzk" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.014109 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-sn42f"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.014196 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-sn42f" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.020118 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-672k9" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.026787 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.034262 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.039501 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ae023e2-1168-43b9-83b4-49ff02bb9ea4-console-oauth-config\") pod \"console-64d44f6ddf-vz6mj\" (UID: \"5ae023e2-1168-43b9-83b4-49ff02bb9ea4\") " pod="openshift-console/console-64d44f6ddf-vz6mj" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.039777 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-62kl9\" (UniqueName: \"kubernetes.io/projected/8eb821d0-6b6b-422d-bbc5-8394778f57cd-kube-api-access-62kl9\") pod \"multus-admission-controller-69db94689b-g2szw\" (UID: \"8eb821d0-6b6b-422d-bbc5-8394778f57cd\") " pod="openshift-multus/multus-admission-controller-69db94689b-g2szw" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.039820 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/30f80be9-b4ee-419d-82c1-0341637259d2-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-8f5xb\" (UID: \"30f80be9-b4ee-419d-82c1-0341637259d2\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8f5xb" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.039842 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/780bd316-cfcd-4cf9-b0b9-291cccd331fb-serving-cert\") pod \"authentication-operator-7f5c659b84-mt9s7\" (UID: \"780bd316-cfcd-4cf9-b0b9-291cccd331fb\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mt9s7" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.039864 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bwz5h\" (UniqueName: \"kubernetes.io/projected/1a8aed0e-0d53-4db2-ac89-b0c953e8fc36-kube-api-access-bwz5h\") pod \"package-server-manager-77f986bd66-5vrqr\" (UID: \"1a8aed0e-0d53-4db2-ac89-b0c953e8fc36\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5vrqr" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.039889 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/d17d486a-64e1-4f70-ba16-90bdc93550e7-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-z9n55\" (UID: \"d17d486a-64e1-4f70-ba16-90bdc93550e7\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z9n55" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.039914 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ae023e2-1168-43b9-83b4-49ff02bb9ea4-oauth-serving-cert\") pod \"console-64d44f6ddf-vz6mj\" (UID: \"5ae023e2-1168-43b9-83b4-49ff02bb9ea4\") " pod="openshift-console/console-64d44f6ddf-vz6mj" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.039936 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3a805538-2592-40cf-9131-444b1c0f3cbb-stats-auth\") pod \"router-default-68cf44c8b8-gmfcd\" (UID: \"3a805538-2592-40cf-9131-444b1c0f3cbb\") " pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" Feb 20 00:11:57 crc 
kubenswrapper[5119]: I0220 00:11:57.039958 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/518958b8-c3df-4db5-87c9-6528f05e4bfe-srv-cert\") pod \"catalog-operator-75ff9f647d-6tsqj\" (UID: \"518958b8-c3df-4db5-87c9-6528f05e4bfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6tsqj" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.039982 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-prxgx\" (UniqueName: \"kubernetes.io/projected/54acf8be-ab9f-4e85-8394-dfafbf121b67-kube-api-access-prxgx\") pod \"control-plane-machine-set-operator-75ffdb6fcd-k6wjq\" (UID: \"54acf8be-ab9f-4e85-8394-dfafbf121b67\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k6wjq" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.040004 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/edaa9a3d-ab5b-4848-a9a1-9af97200e40e-tmpfs\") pod \"olm-operator-5cdf44d969-9vxhd\" (UID: \"edaa9a3d-ab5b-4848-a9a1-9af97200e40e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9vxhd" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.040024 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3a805538-2592-40cf-9131-444b1c0f3cbb-default-certificate\") pod \"router-default-68cf44c8b8-gmfcd\" (UID: \"3a805538-2592-40cf-9131-444b1c0f3cbb\") " pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.040044 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2c8310dc-cc1c-4c06-a86e-4147e8b11a7c-signing-cabundle\") pod \"service-ca-74545575db-xk7zk\" (UID: \"2c8310dc-cc1c-4c06-a86e-4147e8b11a7c\") " pod="openshift-service-ca/service-ca-74545575db-xk7zk" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.040081 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ae023e2-1168-43b9-83b4-49ff02bb9ea4-service-ca\") pod \"console-64d44f6ddf-vz6mj\" (UID: \"5ae023e2-1168-43b9-83b4-49ff02bb9ea4\") " pod="openshift-console/console-64d44f6ddf-vz6mj" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.040101 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/830cfba4-378e-4f83-a28a-add80c1cb7e0-tmp-dir\") pod \"etcd-operator-69b85846b6-599mk\" (UID: \"830cfba4-378e-4f83-a28a-add80c1cb7e0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.040123 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hz9r7\" (UniqueName: \"kubernetes.io/projected/518958b8-c3df-4db5-87c9-6528f05e4bfe-kube-api-access-hz9r7\") pod \"catalog-operator-75ff9f647d-6tsqj\" (UID: \"518958b8-c3df-4db5-87c9-6528f05e4bfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6tsqj" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.040167 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/8e68c79d-159b-48a1-ba29-0ecebeeaf581-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-zg88l\" (UID: \"8e68c79d-159b-48a1-ba29-0ecebeeaf581\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zg88l" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.040187 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1b64ff0-878d-4b3d-8b8f-67cf8ea08987-kube-api-access\") pod \"kube-apiserver-operator-575994946d-4q264\" (UID: \"b1b64ff0-878d-4b3d-8b8f-67cf8ea08987\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-4q264" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.040203 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b1b64ff0-878d-4b3d-8b8f-67cf8ea08987-tmp-dir\") pod \"kube-apiserver-operator-575994946d-4q264\" (UID: \"b1b64ff0-878d-4b3d-8b8f-67cf8ea08987\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-4q264" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.040222 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f0b5a231-2ef1-425a-ae19-7ff305bed3e0-apiservice-cert\") pod \"packageserver-7d4fc7d867-j8mqx\" (UID: \"f0b5a231-2ef1-425a-ae19-7ff305bed3e0\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-j8mqx" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.040240 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/780bd316-cfcd-4cf9-b0b9-291cccd331fb-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-mt9s7\" (UID: \"780bd316-cfcd-4cf9-b0b9-291cccd331fb\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mt9s7" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.040261 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e68c79d-159b-48a1-ba29-0ecebeeaf581-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-zg88l\" (UID: \"8e68c79d-159b-48a1-ba29-0ecebeeaf581\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zg88l" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.040280 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/830cfba4-378e-4f83-a28a-add80c1cb7e0-serving-cert\") pod \"etcd-operator-69b85846b6-599mk\" (UID: \"830cfba4-378e-4f83-a28a-add80c1cb7e0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.040302 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8eb821d0-6b6b-422d-bbc5-8394778f57cd-webhook-certs\") pod \"multus-admission-controller-69db94689b-g2szw\" (UID: \"8eb821d0-6b6b-422d-bbc5-8394778f57cd\") " pod="openshift-multus/multus-admission-controller-69db94689b-g2szw" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.040320 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8sqnw\" (UniqueName: 
\"kubernetes.io/projected/edaa9a3d-ab5b-4848-a9a1-9af97200e40e-kube-api-access-8sqnw\") pod \"olm-operator-5cdf44d969-9vxhd\" (UID: \"edaa9a3d-ab5b-4848-a9a1-9af97200e40e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9vxhd" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.040339 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/518958b8-c3df-4db5-87c9-6528f05e4bfe-tmpfs\") pod \"catalog-operator-75ff9f647d-6tsqj\" (UID: \"518958b8-c3df-4db5-87c9-6528f05e4bfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6tsqj" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.040380 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gqqlg\" (UniqueName: \"kubernetes.io/projected/8e68c79d-159b-48a1-ba29-0ecebeeaf581-kube-api-access-gqqlg\") pod \"machine-config-operator-67c9d58cbb-zg88l\" (UID: \"8e68c79d-159b-48a1-ba29-0ecebeeaf581\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zg88l" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.040400 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a805538-2592-40cf-9131-444b1c0f3cbb-service-ca-bundle\") pod \"router-default-68cf44c8b8-gmfcd\" (UID: \"3a805538-2592-40cf-9131-444b1c0f3cbb\") " pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.040422 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rjlnd\" (UniqueName: \"kubernetes.io/projected/3a805538-2592-40cf-9131-444b1c0f3cbb-kube-api-access-rjlnd\") pod \"router-default-68cf44c8b8-gmfcd\" (UID: \"3a805538-2592-40cf-9131-444b1c0f3cbb\") " pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.040446 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9g54\" (UniqueName: \"kubernetes.io/projected/d17d486a-64e1-4f70-ba16-90bdc93550e7-kube-api-access-x9g54\") pod \"cluster-image-registry-operator-86c45576b9-z9n55\" (UID: \"d17d486a-64e1-4f70-ba16-90bdc93550e7\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z9n55" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.040560 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2929140b-fc72-423b-a599-6893f45f258b-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-fd2rm\" (UID: \"2929140b-fc72-423b-a599-6893f45f258b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fd2rm" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.040602 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1b64ff0-878d-4b3d-8b8f-67cf8ea08987-serving-cert\") pod \"kube-apiserver-operator-575994946d-4q264\" (UID: \"b1b64ff0-878d-4b3d-8b8f-67cf8ea08987\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-4q264" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.040862 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/d17d486a-64e1-4f70-ba16-90bdc93550e7-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-z9n55\" (UID: \"d17d486a-64e1-4f70-ba16-90bdc93550e7\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z9n55" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.040940 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/30f80be9-b4ee-419d-82c1-0341637259d2-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-8f5xb\" (UID: \"30f80be9-b4ee-419d-82c1-0341637259d2\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8f5xb" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.041033 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/97ee1eef-ee44-442d-ab55-8ae1d6bd364a-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-zshjz\" (UID: \"97ee1eef-ee44-442d-ab55-8ae1d6bd364a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zshjz" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.041072 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8e68c79d-159b-48a1-ba29-0ecebeeaf581-images\") pod \"machine-config-operator-67c9d58cbb-zg88l\" (UID: \"8e68c79d-159b-48a1-ba29-0ecebeeaf581\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zg88l" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.041098 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/830cfba4-378e-4f83-a28a-add80c1cb7e0-etcd-ca\") pod \"etcd-operator-69b85846b6-599mk\" (UID: \"830cfba4-378e-4f83-a28a-add80c1cb7e0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.041304 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f0b5a231-2ef1-425a-ae19-7ff305bed3e0-tmpfs\") pod \"packageserver-7d4fc7d867-j8mqx\" (UID: \"f0b5a231-2ef1-425a-ae19-7ff305bed3e0\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-j8mqx" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.041429 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/518958b8-c3df-4db5-87c9-6528f05e4bfe-tmpfs\") pod \"catalog-operator-75ff9f647d-6tsqj\" (UID: \"518958b8-c3df-4db5-87c9-6528f05e4bfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6tsqj" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.041951 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e68c79d-159b-48a1-ba29-0ecebeeaf581-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-zg88l\" (UID: \"8e68c79d-159b-48a1-ba29-0ecebeeaf581\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zg88l" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.042477 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/97ee1eef-ee44-442d-ab55-8ae1d6bd364a-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-zshjz\" (UID: 
\"97ee1eef-ee44-442d-ab55-8ae1d6bd364a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zshjz" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.042517 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/edaa9a3d-ab5b-4848-a9a1-9af97200e40e-tmpfs\") pod \"olm-operator-5cdf44d969-9vxhd\" (UID: \"edaa9a3d-ab5b-4848-a9a1-9af97200e40e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9vxhd" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.042760 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/30f80be9-b4ee-419d-82c1-0341637259d2-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-8f5xb\" (UID: \"30f80be9-b4ee-419d-82c1-0341637259d2\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8f5xb" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.042775 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zrdts\" (UniqueName: \"kubernetes.io/projected/05c29cb3-f6e5-40f4-8780-0a98b0c52330-kube-api-access-zrdts\") pod \"migrator-866fcbc849-fvnk8\" (UID: \"05c29cb3-f6e5-40f4-8780-0a98b0c52330\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-fvnk8" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.043076 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tfkmz\" (UniqueName: \"kubernetes.io/projected/30f80be9-b4ee-419d-82c1-0341637259d2-kube-api-access-tfkmz\") pod \"machine-config-controller-f9cdd68f7-8f5xb\" (UID: \"30f80be9-b4ee-419d-82c1-0341637259d2\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8f5xb" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.043212 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2929140b-fc72-423b-a599-6893f45f258b-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-fd2rm\" (UID: \"2929140b-fc72-423b-a599-6893f45f258b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fd2rm" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.043231 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8e68c79d-159b-48a1-ba29-0ecebeeaf581-images\") pod \"machine-config-operator-67c9d58cbb-zg88l\" (UID: \"8e68c79d-159b-48a1-ba29-0ecebeeaf581\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zg88l" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.043250 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bplr5\" (UniqueName: \"kubernetes.io/projected/830cfba4-378e-4f83-a28a-add80c1cb7e0-kube-api-access-bplr5\") pod \"etcd-operator-69b85846b6-599mk\" (UID: \"830cfba4-378e-4f83-a28a-add80c1cb7e0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.043282 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/830cfba4-378e-4f83-a28a-add80c1cb7e0-etcd-client\") pod \"etcd-operator-69b85846b6-599mk\" (UID: \"830cfba4-378e-4f83-a28a-add80c1cb7e0\") " 
pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.043218 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f0b5a231-2ef1-425a-ae19-7ff305bed3e0-tmpfs\") pod \"packageserver-7d4fc7d867-j8mqx\" (UID: \"f0b5a231-2ef1-425a-ae19-7ff305bed3e0\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-j8mqx" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.043407 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5vz8\" (UniqueName: \"kubernetes.io/projected/5ae023e2-1168-43b9-83b4-49ff02bb9ea4-kube-api-access-j5vz8\") pod \"console-64d44f6ddf-vz6mj\" (UID: \"5ae023e2-1168-43b9-83b4-49ff02bb9ea4\") " pod="openshift-console/console-64d44f6ddf-vz6mj" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.043502 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1a8aed0e-0d53-4db2-ac89-b0c953e8fc36-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-5vrqr\" (UID: \"1a8aed0e-0d53-4db2-ac89-b0c953e8fc36\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5vrqr" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.043526 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ae023e2-1168-43b9-83b4-49ff02bb9ea4-console-serving-cert\") pod \"console-64d44f6ddf-vz6mj\" (UID: \"5ae023e2-1168-43b9-83b4-49ff02bb9ea4\") " pod="openshift-console/console-64d44f6ddf-vz6mj" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.043596 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/780bd316-cfcd-4cf9-b0b9-291cccd331fb-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-mt9s7\" (UID: \"780bd316-cfcd-4cf9-b0b9-291cccd331fb\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mt9s7" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.043714 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d17d486a-64e1-4f70-ba16-90bdc93550e7-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-z9n55\" (UID: \"d17d486a-64e1-4f70-ba16-90bdc93550e7\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z9n55" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.043749 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2929140b-fc72-423b-a599-6893f45f258b-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-fd2rm\" (UID: \"2929140b-fc72-423b-a599-6893f45f258b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fd2rm" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.043794 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97ee1eef-ee44-442d-ab55-8ae1d6bd364a-config\") pod \"openshift-kube-scheduler-operator-54f497555d-zshjz\" (UID: \"97ee1eef-ee44-442d-ab55-8ae1d6bd364a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zshjz" 
Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.043853 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/edaa9a3d-ab5b-4848-a9a1-9af97200e40e-srv-cert\") pod \"olm-operator-5cdf44d969-9vxhd\" (UID: \"edaa9a3d-ab5b-4848-a9a1-9af97200e40e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9vxhd" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.043954 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a805538-2592-40cf-9131-444b1c0f3cbb-metrics-certs\") pod \"router-default-68cf44c8b8-gmfcd\" (UID: \"3a805538-2592-40cf-9131-444b1c0f3cbb\") " pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.043994 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f0b5a231-2ef1-425a-ae19-7ff305bed3e0-webhook-cert\") pod \"packageserver-7d4fc7d867-j8mqx\" (UID: \"f0b5a231-2ef1-425a-ae19-7ff305bed3e0\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-j8mqx" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.044049 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97ee1eef-ee44-442d-ab55-8ae1d6bd364a-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-zshjz\" (UID: \"97ee1eef-ee44-442d-ab55-8ae1d6bd364a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zshjz" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.044075 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/830cfba4-378e-4f83-a28a-add80c1cb7e0-config\") pod \"etcd-operator-69b85846b6-599mk\" (UID: \"830cfba4-378e-4f83-a28a-add80c1cb7e0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.044131 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/edaa9a3d-ab5b-4848-a9a1-9af97200e40e-profile-collector-cert\") pod \"olm-operator-5cdf44d969-9vxhd\" (UID: \"edaa9a3d-ab5b-4848-a9a1-9af97200e40e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9vxhd" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.044164 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/780bd316-cfcd-4cf9-b0b9-291cccd331fb-config\") pod \"authentication-operator-7f5c659b84-mt9s7\" (UID: \"780bd316-cfcd-4cf9-b0b9-291cccd331fb\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mt9s7" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.044201 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ae023e2-1168-43b9-83b4-49ff02bb9ea4-console-config\") pod \"console-64d44f6ddf-vz6mj\" (UID: \"5ae023e2-1168-43b9-83b4-49ff02bb9ea4\") " pod="openshift-console/console-64d44f6ddf-vz6mj" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.044250 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/d17d486a-64e1-4f70-ba16-90bdc93550e7-tmp\") pod \"cluster-image-registry-operator-86c45576b9-z9n55\" (UID: \"d17d486a-64e1-4f70-ba16-90bdc93550e7\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z9n55" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.044283 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2929140b-fc72-423b-a599-6893f45f258b-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-fd2rm\" (UID: \"2929140b-fc72-423b-a599-6893f45f258b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fd2rm" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.044392 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bgkb9\" (UniqueName: \"kubernetes.io/projected/780bd316-cfcd-4cf9-b0b9-291cccd331fb-kube-api-access-bgkb9\") pod \"authentication-operator-7f5c659b84-mt9s7\" (UID: \"780bd316-cfcd-4cf9-b0b9-291cccd331fb\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mt9s7" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.044399 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2c8310dc-cc1c-4c06-a86e-4147e8b11a7c-signing-cabundle\") pod \"service-ca-74545575db-xk7zk\" (UID: \"2c8310dc-cc1c-4c06-a86e-4147e8b11a7c\") " pod="openshift-service-ca/service-ca-74545575db-xk7zk" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.044459 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/518958b8-c3df-4db5-87c9-6528f05e4bfe-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-6tsqj\" (UID: \"518958b8-c3df-4db5-87c9-6528f05e4bfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6tsqj" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.044486 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97ee1eef-ee44-442d-ab55-8ae1d6bd364a-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-zshjz\" (UID: \"97ee1eef-ee44-442d-ab55-8ae1d6bd364a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zshjz" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.044505 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zpjcw\" (UniqueName: \"kubernetes.io/projected/2c8310dc-cc1c-4c06-a86e-4147e8b11a7c-kube-api-access-zpjcw\") pod \"service-ca-74545575db-xk7zk\" (UID: \"2c8310dc-cc1c-4c06-a86e-4147e8b11a7c\") " pod="openshift-service-ca/service-ca-74545575db-xk7zk" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.044532 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2c8310dc-cc1c-4c06-a86e-4147e8b11a7c-signing-key\") pod \"service-ca-74545575db-xk7zk\" (UID: \"2c8310dc-cc1c-4c06-a86e-4147e8b11a7c\") " pod="openshift-service-ca/service-ca-74545575db-xk7zk" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.044572 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/830cfba4-378e-4f83-a28a-add80c1cb7e0-etcd-service-ca\") pod 
\"etcd-operator-69b85846b6-599mk\" (UID: \"830cfba4-378e-4f83-a28a-add80c1cb7e0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.044591 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d17d486a-64e1-4f70-ba16-90bdc93550e7-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-z9n55\" (UID: \"d17d486a-64e1-4f70-ba16-90bdc93550e7\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z9n55" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.044616 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ae023e2-1168-43b9-83b4-49ff02bb9ea4-trusted-ca-bundle\") pod \"console-64d44f6ddf-vz6mj\" (UID: \"5ae023e2-1168-43b9-83b4-49ff02bb9ea4\") " pod="openshift-console/console-64d44f6ddf-vz6mj" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.044634 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1b64ff0-878d-4b3d-8b8f-67cf8ea08987-config\") pod \"kube-apiserver-operator-575994946d-4q264\" (UID: \"b1b64ff0-878d-4b3d-8b8f-67cf8ea08987\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-4q264" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.044659 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2929140b-fc72-423b-a599-6893f45f258b-config\") pod \"kube-controller-manager-operator-69d5f845f8-fd2rm\" (UID: \"2929140b-fc72-423b-a599-6893f45f258b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fd2rm" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.044693 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/54acf8be-ab9f-4e85-8394-dfafbf121b67-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-k6wjq\" (UID: \"54acf8be-ab9f-4e85-8394-dfafbf121b67\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k6wjq" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.044715 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-svn2b\" (UniqueName: \"kubernetes.io/projected/f0b5a231-2ef1-425a-ae19-7ff305bed3e0-kube-api-access-svn2b\") pod \"packageserver-7d4fc7d867-j8mqx\" (UID: \"f0b5a231-2ef1-425a-ae19-7ff305bed3e0\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-j8mqx" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.045105 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.046095 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.046307 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8eb821d0-6b6b-422d-bbc5-8394778f57cd-webhook-certs\") pod \"multus-admission-controller-69db94689b-g2szw\" (UID: \"8eb821d0-6b6b-422d-bbc5-8394778f57cd\") " pod="openshift-multus/multus-admission-controller-69db94689b-g2szw" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.047879 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e68c79d-159b-48a1-ba29-0ecebeeaf581-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-zg88l\" (UID: \"8e68c79d-159b-48a1-ba29-0ecebeeaf581\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zg88l" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.050986 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/518958b8-c3df-4db5-87c9-6528f05e4bfe-srv-cert\") pod \"catalog-operator-75ff9f647d-6tsqj\" (UID: \"518958b8-c3df-4db5-87c9-6528f05e4bfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6tsqj" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.052747 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/edaa9a3d-ab5b-4848-a9a1-9af97200e40e-profile-collector-cert\") pod \"olm-operator-5cdf44d969-9vxhd\" (UID: \"edaa9a3d-ab5b-4848-a9a1-9af97200e40e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9vxhd" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.055200 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2c8310dc-cc1c-4c06-a86e-4147e8b11a7c-signing-key\") pod \"service-ca-74545575db-xk7zk\" (UID: \"2c8310dc-cc1c-4c06-a86e-4147e8b11a7c\") " pod="openshift-service-ca/service-ca-74545575db-xk7zk" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.058553 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/518958b8-c3df-4db5-87c9-6528f05e4bfe-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-6tsqj\" (UID: \"518958b8-c3df-4db5-87c9-6528f05e4bfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6tsqj" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.059255 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/54acf8be-ab9f-4e85-8394-dfafbf121b67-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-k6wjq\" (UID: \"54acf8be-ab9f-4e85-8394-dfafbf121b67\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k6wjq" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.059827 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97ee1eef-ee44-442d-ab55-8ae1d6bd364a-serving-cert\") pod 
\"openshift-kube-scheduler-operator-54f497555d-zshjz\" (UID: \"97ee1eef-ee44-442d-ab55-8ae1d6bd364a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zshjz" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.065441 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.065998 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.074911 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97ee1eef-ee44-442d-ab55-8ae1d6bd364a-config\") pod \"openshift-kube-scheduler-operator-54f497555d-zshjz\" (UID: \"97ee1eef-ee44-442d-ab55-8ae1d6bd364a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zshjz" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.085506 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.086203 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-psrg4" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.106977 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.121685 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1a8aed0e-0d53-4db2-ac89-b0c953e8fc36-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-5vrqr\" (UID: \"1a8aed0e-0d53-4db2-ac89-b0c953e8fc36\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5vrqr" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.126767 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.146440 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/830cfba4-378e-4f83-a28a-add80c1cb7e0-config\") pod \"etcd-operator-69b85846b6-599mk\" (UID: \"830cfba4-378e-4f83-a28a-add80c1cb7e0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.146489 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ae023e2-1168-43b9-83b4-49ff02bb9ea4-console-config\") pod \"console-64d44f6ddf-vz6mj\" (UID: \"5ae023e2-1168-43b9-83b4-49ff02bb9ea4\") " pod="openshift-console/console-64d44f6ddf-vz6mj" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.146509 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d17d486a-64e1-4f70-ba16-90bdc93550e7-tmp\") pod \"cluster-image-registry-operator-86c45576b9-z9n55\" (UID: \"d17d486a-64e1-4f70-ba16-90bdc93550e7\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z9n55" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.146570 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/830cfba4-378e-4f83-a28a-add80c1cb7e0-etcd-service-ca\") pod \"etcd-operator-69b85846b6-599mk\" (UID: \"830cfba4-378e-4f83-a28a-add80c1cb7e0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.146587 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d17d486a-64e1-4f70-ba16-90bdc93550e7-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-z9n55\" (UID: \"d17d486a-64e1-4f70-ba16-90bdc93550e7\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z9n55" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.146602 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ae023e2-1168-43b9-83b4-49ff02bb9ea4-trusted-ca-bundle\") pod \"console-64d44f6ddf-vz6mj\" (UID: \"5ae023e2-1168-43b9-83b4-49ff02bb9ea4\") " pod="openshift-console/console-64d44f6ddf-vz6mj" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.146618 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1b64ff0-878d-4b3d-8b8f-67cf8ea08987-config\") pod \"kube-apiserver-operator-575994946d-4q264\" (UID: \"b1b64ff0-878d-4b3d-8b8f-67cf8ea08987\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-4q264" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.146644 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ae023e2-1168-43b9-83b4-49ff02bb9ea4-console-oauth-config\") pod \"console-64d44f6ddf-vz6mj\" (UID: \"5ae023e2-1168-43b9-83b4-49ff02bb9ea4\") " pod="openshift-console/console-64d44f6ddf-vz6mj" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.146667 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/d17d486a-64e1-4f70-ba16-90bdc93550e7-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-z9n55\" (UID: \"d17d486a-64e1-4f70-ba16-90bdc93550e7\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z9n55" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.146681 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ae023e2-1168-43b9-83b4-49ff02bb9ea4-oauth-serving-cert\") pod \"console-64d44f6ddf-vz6mj\" (UID: \"5ae023e2-1168-43b9-83b4-49ff02bb9ea4\") " pod="openshift-console/console-64d44f6ddf-vz6mj" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.146709 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ae023e2-1168-43b9-83b4-49ff02bb9ea4-service-ca\") pod \"console-64d44f6ddf-vz6mj\" (UID: \"5ae023e2-1168-43b9-83b4-49ff02bb9ea4\") " pod="openshift-console/console-64d44f6ddf-vz6mj" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.146727 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/830cfba4-378e-4f83-a28a-add80c1cb7e0-tmp-dir\") pod \"etcd-operator-69b85846b6-599mk\" (UID: \"830cfba4-378e-4f83-a28a-add80c1cb7e0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.146757 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1b64ff0-878d-4b3d-8b8f-67cf8ea08987-kube-api-access\") pod \"kube-apiserver-operator-575994946d-4q264\" (UID: \"b1b64ff0-878d-4b3d-8b8f-67cf8ea08987\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-4q264" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.146773 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b1b64ff0-878d-4b3d-8b8f-67cf8ea08987-tmp-dir\") pod \"kube-apiserver-operator-575994946d-4q264\" (UID: \"b1b64ff0-878d-4b3d-8b8f-67cf8ea08987\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-4q264" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.146808 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/830cfba4-378e-4f83-a28a-add80c1cb7e0-serving-cert\") pod \"etcd-operator-69b85846b6-599mk\" (UID: \"830cfba4-378e-4f83-a28a-add80c1cb7e0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.146851 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x9g54\" (UniqueName: \"kubernetes.io/projected/d17d486a-64e1-4f70-ba16-90bdc93550e7-kube-api-access-x9g54\") pod \"cluster-image-registry-operator-86c45576b9-z9n55\" (UID: \"d17d486a-64e1-4f70-ba16-90bdc93550e7\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z9n55" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.146877 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1b64ff0-878d-4b3d-8b8f-67cf8ea08987-serving-cert\") pod \"kube-apiserver-operator-575994946d-4q264\" (UID: \"b1b64ff0-878d-4b3d-8b8f-67cf8ea08987\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-4q264" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.147257 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/d17d486a-64e1-4f70-ba16-90bdc93550e7-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-z9n55\" (UID: \"d17d486a-64e1-4f70-ba16-90bdc93550e7\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z9n55" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.147637 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d17d486a-64e1-4f70-ba16-90bdc93550e7-tmp\") pod \"cluster-image-registry-operator-86c45576b9-z9n55\" (UID: \"d17d486a-64e1-4f70-ba16-90bdc93550e7\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z9n55" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.147766 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d17d486a-64e1-4f70-ba16-90bdc93550e7-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-86c45576b9-z9n55\" (UID: \"d17d486a-64e1-4f70-ba16-90bdc93550e7\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z9n55" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.147842 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/830cfba4-378e-4f83-a28a-add80c1cb7e0-etcd-ca\") pod \"etcd-operator-69b85846b6-599mk\" (UID: \"830cfba4-378e-4f83-a28a-add80c1cb7e0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.147901 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bplr5\" (UniqueName: \"kubernetes.io/projected/830cfba4-378e-4f83-a28a-add80c1cb7e0-kube-api-access-bplr5\") pod \"etcd-operator-69b85846b6-599mk\" (UID: \"830cfba4-378e-4f83-a28a-add80c1cb7e0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.147933 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/830cfba4-378e-4f83-a28a-add80c1cb7e0-etcd-client\") pod \"etcd-operator-69b85846b6-599mk\" (UID: \"830cfba4-378e-4f83-a28a-add80c1cb7e0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.147968 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j5vz8\" (UniqueName: \"kubernetes.io/projected/5ae023e2-1168-43b9-83b4-49ff02bb9ea4-kube-api-access-j5vz8\") pod \"console-64d44f6ddf-vz6mj\" (UID: \"5ae023e2-1168-43b9-83b4-49ff02bb9ea4\") " pod="openshift-console/console-64d44f6ddf-vz6mj" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.148001 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ae023e2-1168-43b9-83b4-49ff02bb9ea4-console-serving-cert\") pod \"console-64d44f6ddf-vz6mj\" (UID: \"5ae023e2-1168-43b9-83b4-49ff02bb9ea4\") " pod="openshift-console/console-64d44f6ddf-vz6mj" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.148037 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d17d486a-64e1-4f70-ba16-90bdc93550e7-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-z9n55\" (UID: \"d17d486a-64e1-4f70-ba16-90bdc93550e7\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z9n55" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.150995 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/830cfba4-378e-4f83-a28a-add80c1cb7e0-tmp-dir\") pod \"etcd-operator-69b85846b6-599mk\" (UID: \"830cfba4-378e-4f83-a28a-add80c1cb7e0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.151358 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b1b64ff0-878d-4b3d-8b8f-67cf8ea08987-tmp-dir\") pod \"kube-apiserver-operator-575994946d-4q264\" (UID: \"b1b64ff0-878d-4b3d-8b8f-67cf8ea08987\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-4q264" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.151832 5119 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.167967 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.185047 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.197219 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3a805538-2592-40cf-9131-444b1c0f3cbb-stats-auth\") pod \"router-default-68cf44c8b8-gmfcd\" (UID: \"3a805538-2592-40cf-9131-444b1c0f3cbb\") " pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.204745 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.225941 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.247849 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.257528 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3a805538-2592-40cf-9131-444b1c0f3cbb-default-certificate\") pod \"router-default-68cf44c8b8-gmfcd\" (UID: \"3a805538-2592-40cf-9131-444b1c0f3cbb\") " pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.262041 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29525760-qdjzk"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.269498 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.277975 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a805538-2592-40cf-9131-444b1c0f3cbb-metrics-certs\") pod \"router-default-68cf44c8b8-gmfcd\" (UID: \"3a805538-2592-40cf-9131-444b1c0f3cbb\") " pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" Feb 20 00:11:57 crc kubenswrapper[5119]: W0220 00:11:57.282388 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2daa98ea_b766_495c_a3e7_5d73232ddc18.slice/crio-bb5e188e3336074fdfb6dc9bd1759ec66e9c9e98907db8f100f4a18d29479017 WatchSource:0}: Error finding container bb5e188e3336074fdfb6dc9bd1759ec66e9c9e98907db8f100f4a18d29479017: Status 404 returned error can't find the container with id bb5e188e3336074fdfb6dc9bd1759ec66e9c9e98907db8f100f4a18d29479017 Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.286495 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.292589 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3a805538-2592-40cf-9131-444b1c0f3cbb-service-ca-bundle\") pod \"router-default-68cf44c8b8-gmfcd\" (UID: \"3a805538-2592-40cf-9131-444b1c0f3cbb\") " pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.305641 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.344259 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.349728 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.349792 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-8qbnc"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.350186 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.363037 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2929140b-fc72-423b-a599-6893f45f258b-config\") pod \"kube-controller-manager-operator-69d5f845f8-fd2rm\" (UID: \"2929140b-fc72-423b-a599-6893f45f258b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fd2rm" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.376129 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.388494 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.406939 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.407084 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-672k9"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.408128 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2929140b-fc72-423b-a599-6893f45f258b-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-fd2rm\" (UID: \"2929140b-fc72-423b-a599-6893f45f258b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fd2rm" Feb 20 00:11:57 crc kubenswrapper[5119]: W0220 00:11:57.417461 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfef1a991_20ef_447f_94d6_c18c5f875ae8.slice/crio-b7009c12a37cb663835cdbea3852e95f5cad150d82d8fe84bd1b86ff3dcedea3 WatchSource:0}: Error finding container b7009c12a37cb663835cdbea3852e95f5cad150d82d8fe84bd1b86ff3dcedea3: Status 404 returned error can't find the container with id 
b7009c12a37cb663835cdbea3852e95f5cad150d82d8fe84bd1b86ff3dcedea3 Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.417899 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f0b5a231-2ef1-425a-ae19-7ff305bed3e0-apiservice-cert\") pod \"packageserver-7d4fc7d867-j8mqx\" (UID: \"f0b5a231-2ef1-425a-ae19-7ff305bed3e0\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-j8mqx" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.426114 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f0b5a231-2ef1-425a-ae19-7ff305bed3e0-webhook-cert\") pod \"packageserver-7d4fc7d867-j8mqx\" (UID: \"f0b5a231-2ef1-425a-ae19-7ff305bed3e0\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-j8mqx" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.428684 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-psrg4"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.429776 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Feb 20 00:11:57 crc kubenswrapper[5119]: W0220 00:11:57.437320 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode309235a_31b7_4789_9ba3_5839cab177a6.slice/crio-e88f33bcc4ce501955f16a76eda3bbc9b89841b9e4d3d0f8cb2ec5ad38a79c3d WatchSource:0}: Error finding container e88f33bcc4ce501955f16a76eda3bbc9b89841b9e4d3d0f8cb2ec5ad38a79c3d: Status 404 returned error can't find the container with id e88f33bcc4ce501955f16a76eda3bbc9b89841b9e4d3d0f8cb2ec5ad38a79c3d Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.447550 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.448993 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-jnprp"] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.455723 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/30f80be9-b4ee-419d-82c1-0341637259d2-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-8f5xb\" (UID: \"30f80be9-b4ee-419d-82c1-0341637259d2\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8f5xb" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.466034 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Feb 20 00:11:57 crc kubenswrapper[5119]: W0220 00:11:57.466648 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe18e81d_0071_4e9b_a38a_a2ad3052c0bc.slice/crio-30af6b952fba1e777b9b6280b1aec6a1977892c4a264a269e93aa774442ac8c8 WatchSource:0}: Error finding container 30af6b952fba1e777b9b6280b1aec6a1977892c4a264a269e93aa774442ac8c8: Status 404 returned error can't find the container with id 30af6b952fba1e777b9b6280b1aec6a1977892c4a264a269e93aa774442ac8c8 Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.477338 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr" event={"ID":"fef1a991-20ef-447f-94d6-c18c5f875ae8","Type":"ContainerStarted","Data":"b7009c12a37cb663835cdbea3852e95f5cad150d82d8fe84bd1b86ff3dcedea3"} Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.479026 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/edaa9a3d-ab5b-4848-a9a1-9af97200e40e-srv-cert\") pod \"olm-operator-5cdf44d969-9vxhd\" (UID: \"edaa9a3d-ab5b-4848-a9a1-9af97200e40e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9vxhd" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.481130 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" event={"ID":"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead","Type":"ContainerStarted","Data":"38fe784a98d61b8fc59a3d497c9bb5d131eecc0b4453a75f50949d01a477b616"} Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.483328 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-672k9" event={"ID":"4887f9da-b32d-4096-9491-d1368c94dfdc","Type":"ContainerStarted","Data":"2d3c225c21c0e8df19dfac4d6739e4e572f0e657172c3b2a4da7b2c29bc1a4e1"} Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.484525 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.486600 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-psrg4" event={"ID":"e309235a-31b7-4789-9ba3-5839cab177a6","Type":"ContainerStarted","Data":"e88f33bcc4ce501955f16a76eda3bbc9b89841b9e4d3d0f8cb2ec5ad38a79c3d"} Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.488012 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29525760-qdjzk" event={"ID":"2daa98ea-b766-495c-a3e7-5d73232ddc18","Type":"ContainerStarted","Data":"bb5e188e3336074fdfb6dc9bd1759ec66e9c9e98907db8f100f4a18d29479017"} Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.488814 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" event={"ID":"be18e81d-0071-4e9b-a38a-a2ad3052c0bc","Type":"ContainerStarted","Data":"30af6b952fba1e777b9b6280b1aec6a1977892c4a264a269e93aa774442ac8c8"} Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.506123 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.524710 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.536621 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/780bd316-cfcd-4cf9-b0b9-291cccd331fb-serving-cert\") pod \"authentication-operator-7f5c659b84-mt9s7\" (UID: \"780bd316-cfcd-4cf9-b0b9-291cccd331fb\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mt9s7" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.545630 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.554935 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/780bd316-cfcd-4cf9-b0b9-291cccd331fb-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-mt9s7\" (UID: \"780bd316-cfcd-4cf9-b0b9-291cccd331fb\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mt9s7" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.555882 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.555978 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.556029 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.556140 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:11:57 crc kubenswrapper[5119]: E0220 00:11:57.556303 5119 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 20 00:11:57 crc kubenswrapper[5119]: E0220 00:11:57.556367 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-20 00:12:13.556351231 +0000 UTC m=+115.535315543 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 20 00:11:57 crc kubenswrapper[5119]: E0220 00:11:57.556463 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 20 00:11:57 crc kubenswrapper[5119]: E0220 00:11:57.556482 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 00:11:57 crc kubenswrapper[5119]: E0220 00:11:57.556498 5119 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:11:57 crc kubenswrapper[5119]: E0220 00:11:57.556530 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-20 00:12:13.556520456 +0000 UTC m=+115.535484768 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:11:57 crc kubenswrapper[5119]: E0220 00:11:57.556616 5119 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 20 00:11:57 crc kubenswrapper[5119]: E0220 00:11:57.556712 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-20 00:12:13.55668731 +0000 UTC m=+115.535651602 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 20 00:11:57 crc kubenswrapper[5119]: E0220 00:11:57.556807 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 20 00:11:57 crc kubenswrapper[5119]: E0220 00:11:57.556825 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 00:11:57 crc kubenswrapper[5119]: E0220 00:11:57.556839 5119 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:11:57 crc kubenswrapper[5119]: E0220 00:11:57.556876 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-20 00:12:13.556865235 +0000 UTC m=+115.535829527 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.572132 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.573348 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/780bd316-cfcd-4cf9-b0b9-291cccd331fb-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-mt9s7\" (UID: \"780bd316-cfcd-4cf9-b0b9-291cccd331fb\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mt9s7" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.585904 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.597236 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/780bd316-cfcd-4cf9-b0b9-291cccd331fb-config\") pod \"authentication-operator-7f5c659b84-mt9s7\" (UID: \"780bd316-cfcd-4cf9-b0b9-291cccd331fb\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mt9s7" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.605420 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 
00:11:57.624964 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.661107 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:11:57 crc kubenswrapper[5119]: E0220 00:11:57.661669 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:13.66161993 +0000 UTC m=+115.640584222 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.662953 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.664940 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.672267 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d17d486a-64e1-4f70-ba16-90bdc93550e7-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-z9n55\" (UID: \"d17d486a-64e1-4f70-ba16-90bdc93550e7\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z9n55" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.674066 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d17d486a-64e1-4f70-ba16-90bdc93550e7-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-z9n55\" (UID: \"d17d486a-64e1-4f70-ba16-90bdc93550e7\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z9n55" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.702357 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zptmf\" (UniqueName: \"kubernetes.io/projected/863693a2-ad89-4568-abd2-d96f7f9db45e-kube-api-access-zptmf\") pod \"collect-profiles-29525760-fwrs2\" (UID: \"863693a2-ad89-4568-abd2-d96f7f9db45e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-fwrs2" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.722010 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f2mf\" (UniqueName: \"kubernetes.io/projected/da030a07-1d86-4b99-81e4-94c7ee3b5d58-kube-api-access-6f2mf\") pod \"openshift-config-operator-5777786469-s6ttl\" (UID: \"da030a07-1d86-4b99-81e4-94c7ee3b5d58\") " 
pod="openshift-config-operator/openshift-config-operator-5777786469-s6ttl" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.742252 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pcls\" (UniqueName: \"kubernetes.io/projected/425ed829-90c3-45f1-b162-107726016bfd-kube-api-access-4pcls\") pod \"oauth-openshift-66458b6674-h4nz2\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.760171 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc2ns\" (UniqueName: \"kubernetes.io/projected/cd98dcc8-4803-40df-a2d1-88f48b1e14b1-kube-api-access-rc2ns\") pod \"marketplace-operator-547dbd544d-gx2g8\" (UID: \"cd98dcc8-4803-40df-a2d1-88f48b1e14b1\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-gx2g8" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.764138 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00a91a87-0ad1-4805-a686-42ea9dfa6bb9-metrics-certs\") pod \"network-metrics-daemon-vnzx8\" (UID: \"00a91a87-0ad1-4805-a686-42ea9dfa6bb9\") " pod="openshift-multus/network-metrics-daemon-vnzx8" Feb 20 00:11:57 crc kubenswrapper[5119]: E0220 00:11:57.764394 5119 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 00:11:57 crc kubenswrapper[5119]: E0220 00:11:57.764569 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00a91a87-0ad1-4805-a686-42ea9dfa6bb9-metrics-certs podName:00a91a87-0ad1-4805-a686-42ea9dfa6bb9 nodeName:}" failed. No retries permitted until 2026-02-20 00:12:13.764517275 +0000 UTC m=+115.743481577 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/00a91a87-0ad1-4805-a686-42ea9dfa6bb9-metrics-certs") pod "network-metrics-daemon-vnzx8" (UID: "00a91a87-0ad1-4805-a686-42ea9dfa6bb9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.783317 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9zhx\" (UniqueName: \"kubernetes.io/projected/545f05c6-70cf-4ce3-ad50-ad4e264680ca-kube-api-access-f9zhx\") pod \"controller-manager-65b6cccf98-45lmz\" (UID: \"545f05c6-70cf-4ce3-ad50-ad4e264680ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.800090 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmsp6\" (UniqueName: \"kubernetes.io/projected/e420bb9a-54f6-426e-a975-a5fbd88ebcfe-kube-api-access-qmsp6\") pod \"machine-approver-54c688565-djfdx\" (UID: \"e420bb9a-54f6-426e-a975-a5fbd88ebcfe\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-djfdx" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.818788 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5892l\" (UniqueName: \"kubernetes.io/projected/0ca936b9-5497-4d4a-9927-6a7742282c62-kube-api-access-5892l\") pod \"service-ca-operator-5b9c976747-xklbn\" (UID: \"0ca936b9-5497-4d4a-9927-6a7742282c62\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-xklbn" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.819567 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-djfdx" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.827885 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-s6ttl" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.836246 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.841342 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj8vd\" (UniqueName: \"kubernetes.io/projected/017eb489-ceff-4140-a56b-f631b7fab529-kube-api-access-vj8vd\") pod \"cluster-samples-operator-6b564684c8-wqzpb\" (UID: \"017eb489-ceff-4140-a56b-f631b7fab529\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wqzpb" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.846013 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.855937 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.855972 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.856183 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vnzx8" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.856403 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.858351 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-fwrs2" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.865598 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.871046 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-gx2g8" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.877556 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-xklbn" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.890938 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.905784 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.927103 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.946793 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.963981 5119 request.go:752] "Waited before sending request" delay="1.015960055s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&limit=500&resourceVersion=0" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.972665 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Feb 20 00:11:57 crc kubenswrapper[5119]: I0220 00:11:57.994135 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.005393 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.011100 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.032435 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wqzpb" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.033391 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.045092 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.066327 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.069870 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ae023e2-1168-43b9-83b4-49ff02bb9ea4-console-serving-cert\") pod \"console-64d44f6ddf-vz6mj\" (UID: \"5ae023e2-1168-43b9-83b4-49ff02bb9ea4\") " pod="openshift-console/console-64d44f6ddf-vz6mj" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.075115 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ae023e2-1168-43b9-83b4-49ff02bb9ea4-console-oauth-config\") pod \"console-64d44f6ddf-vz6mj\" (UID: \"5ae023e2-1168-43b9-83b4-49ff02bb9ea4\") " pod="openshift-console/console-64d44f6ddf-vz6mj" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.100725 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-h4nz2"] Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.103095 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.104779 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.109435 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ae023e2-1168-43b9-83b4-49ff02bb9ea4-trusted-ca-bundle\") pod \"console-64d44f6ddf-vz6mj\" (UID: \"5ae023e2-1168-43b9-83b4-49ff02bb9ea4\") " pod="openshift-console/console-64d44f6ddf-vz6mj" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.126694 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.128303 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ae023e2-1168-43b9-83b4-49ff02bb9ea4-service-ca\") pod \"console-64d44f6ddf-vz6mj\" (UID: \"5ae023e2-1168-43b9-83b4-49ff02bb9ea4\") " pod="openshift-console/console-64d44f6ddf-vz6mj" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.145312 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Feb 20 00:11:58 crc kubenswrapper[5119]: E0220 00:11:58.147559 5119 configmap.go:193] Couldn't get configMap openshift-console/console-config: failed to sync configmap cache: timed out waiting for the condition Feb 20 00:11:58 crc kubenswrapper[5119]: E0220 00:11:58.147610 5119 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: failed to sync configmap 
cache: timed out waiting for the condition Feb 20 00:11:58 crc kubenswrapper[5119]: E0220 00:11:58.147831 5119 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 20 00:11:58 crc kubenswrapper[5119]: E0220 00:11:58.148013 5119 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 20 00:11:58 crc kubenswrapper[5119]: E0220 00:11:58.148162 5119 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 20 00:11:58 crc kubenswrapper[5119]: E0220 00:11:58.147675 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5ae023e2-1168-43b9-83b4-49ff02bb9ea4-console-config podName:5ae023e2-1168-43b9-83b4-49ff02bb9ea4 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:58.647648253 +0000 UTC m=+100.626612535 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/5ae023e2-1168-43b9-83b4-49ff02bb9ea4-console-config") pod "console-64d44f6ddf-vz6mj" (UID: "5ae023e2-1168-43b9-83b4-49ff02bb9ea4") : failed to sync configmap cache: timed out waiting for the condition Feb 20 00:11:58 crc kubenswrapper[5119]: E0220 00:11:58.148423 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b1b64ff0-878d-4b3d-8b8f-67cf8ea08987-config podName:b1b64ff0-878d-4b3d-8b8f-67cf8ea08987 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:58.648333191 +0000 UTC m=+100.627297483 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b1b64ff0-878d-4b3d-8b8f-67cf8ea08987-config") pod "kube-apiserver-operator-575994946d-4q264" (UID: "b1b64ff0-878d-4b3d-8b8f-67cf8ea08987") : failed to sync configmap cache: timed out waiting for the condition Feb 20 00:11:58 crc kubenswrapper[5119]: E0220 00:11:58.148456 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/830cfba4-378e-4f83-a28a-add80c1cb7e0-etcd-service-ca podName:830cfba4-378e-4f83-a28a-add80c1cb7e0 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:58.648448045 +0000 UTC m=+100.627412337 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/830cfba4-378e-4f83-a28a-add80c1cb7e0-etcd-service-ca") pod "etcd-operator-69b85846b6-599mk" (UID: "830cfba4-378e-4f83-a28a-add80c1cb7e0") : failed to sync configmap cache: timed out waiting for the condition Feb 20 00:11:58 crc kubenswrapper[5119]: E0220 00:11:58.148482 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b64ff0-878d-4b3d-8b8f-67cf8ea08987-serving-cert podName:b1b64ff0-878d-4b3d-8b8f-67cf8ea08987 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:58.648471795 +0000 UTC m=+100.627436087 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b1b64ff0-878d-4b3d-8b8f-67cf8ea08987-serving-cert") pod "kube-apiserver-operator-575994946d-4q264" (UID: "b1b64ff0-878d-4b3d-8b8f-67cf8ea08987") : failed to sync secret cache: timed out waiting for the condition Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.149920 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ae023e2-1168-43b9-83b4-49ff02bb9ea4-oauth-serving-cert\") pod \"console-64d44f6ddf-vz6mj\" (UID: \"5ae023e2-1168-43b9-83b4-49ff02bb9ea4\") " pod="openshift-console/console-64d44f6ddf-vz6mj" Feb 20 00:11:58 crc kubenswrapper[5119]: E0220 00:11:58.150509 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/830cfba4-378e-4f83-a28a-add80c1cb7e0-config podName:830cfba4-378e-4f83-a28a-add80c1cb7e0 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:58.650464098 +0000 UTC m=+100.629428400 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/830cfba4-378e-4f83-a28a-add80c1cb7e0-config") pod "etcd-operator-69b85846b6-599mk" (UID: "830cfba4-378e-4f83-a28a-add80c1cb7e0") : failed to sync configmap cache: timed out waiting for the condition Feb 20 00:11:58 crc kubenswrapper[5119]: E0220 00:11:58.151345 5119 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: failed to sync secret cache: timed out waiting for the condition Feb 20 00:11:58 crc kubenswrapper[5119]: E0220 00:11:58.151383 5119 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 20 00:11:58 crc kubenswrapper[5119]: E0220 00:11:58.151385 5119 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 20 00:11:58 crc kubenswrapper[5119]: E0220 00:11:58.151403 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/830cfba4-378e-4f83-a28a-add80c1cb7e0-etcd-client podName:830cfba4-378e-4f83-a28a-add80c1cb7e0 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:58.651390163 +0000 UTC m=+100.630354455 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/830cfba4-378e-4f83-a28a-add80c1cb7e0-etcd-client") pod "etcd-operator-69b85846b6-599mk" (UID: "830cfba4-378e-4f83-a28a-add80c1cb7e0") : failed to sync secret cache: timed out waiting for the condition Feb 20 00:11:58 crc kubenswrapper[5119]: E0220 00:11:58.151433 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/830cfba4-378e-4f83-a28a-add80c1cb7e0-serving-cert podName:830cfba4-378e-4f83-a28a-add80c1cb7e0 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:58.651414564 +0000 UTC m=+100.630378856 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/830cfba4-378e-4f83-a28a-add80c1cb7e0-serving-cert") pod "etcd-operator-69b85846b6-599mk" (UID: "830cfba4-378e-4f83-a28a-add80c1cb7e0") : failed to sync secret cache: timed out waiting for the condition Feb 20 00:11:58 crc kubenswrapper[5119]: E0220 00:11:58.151467 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/830cfba4-378e-4f83-a28a-add80c1cb7e0-etcd-ca podName:830cfba4-378e-4f83-a28a-add80c1cb7e0 nodeName:}" failed. No retries permitted until 2026-02-20 00:11:58.651457845 +0000 UTC m=+100.630422137 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/830cfba4-378e-4f83-a28a-add80c1cb7e0-etcd-ca") pod "etcd-operator-69b85846b6-599mk" (UID: "830cfba4-378e-4f83-a28a-add80c1cb7e0") : failed to sync configmap cache: timed out waiting for the condition Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.164522 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.185461 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.206888 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.230499 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.247998 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.268898 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.278496 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-s6ttl"] Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.288382 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.310911 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.313082 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-xklbn"] Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.324900 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525760-fwrs2"] Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.327785 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Feb 20 00:11:58 crc kubenswrapper[5119]: W0220 00:11:58.345776 5119 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod863693a2_ad89_4568_abd2_d96f7f9db45e.slice/crio-b426dc518f52e15da9f05b392806406d0bda9cfd93b3e09033d7c2cfb5afe36a WatchSource:0}: Error finding container b426dc518f52e15da9f05b392806406d0bda9cfd93b3e09033d7c2cfb5afe36a: Status 404 returned error can't find the container with id b426dc518f52e15da9f05b392806406d0bda9cfd93b3e09033d7c2cfb5afe36a Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.349802 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.354756 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-gx2g8"] Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.367098 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.387641 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.408924 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.425115 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.435313 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wqzpb"] Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.449689 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-45lmz"] Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.453093 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.466757 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.485426 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.495844 5119 generic.go:358] "Generic (PLEG): container finished" podID="be18e81d-0071-4e9b-a38a-a2ad3052c0bc" containerID="e3e964d16113783b6a4a7ca5b7a2ed8d48d8707a7fdedc893a5b926671b1f9d3" exitCode=0 Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.495975 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" event={"ID":"be18e81d-0071-4e9b-a38a-a2ad3052c0bc","Type":"ContainerDied","Data":"e3e964d16113783b6a4a7ca5b7a2ed8d48d8707a7fdedc893a5b926671b1f9d3"} Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.499214 5119 generic.go:358] "Generic (PLEG): container finished" podID="bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead" containerID="347a2a8f627970a59e35d0ec45f1189a881d53309af2c3d57c64124834783488" exitCode=0 Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.499276 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" event={"ID":"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead","Type":"ContainerDied","Data":"347a2a8f627970a59e35d0ec45f1189a881d53309af2c3d57c64124834783488"} Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.514121 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.514598 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-s6ttl" event={"ID":"da030a07-1d86-4b99-81e4-94c7ee3b5d58","Type":"ContainerStarted","Data":"2dee0497585394b74f70d06e7dba9aa67c6da35bd97e71f7142101cf6d6b5130"} Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.521020 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" event={"ID":"545f05c6-70cf-4ce3-ad50-ad4e264680ca","Type":"ContainerStarted","Data":"a7b7bd473cb193bca80a191efe0e16c9b59b84b74c6172fc91b308af392cfdd0"} Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.524453 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-xklbn" event={"ID":"0ca936b9-5497-4d4a-9927-6a7742282c62","Type":"ContainerStarted","Data":"6ffbc58f602dcb1be554dbdbcb0a503488fa17ffd53e33034d81c26df5ae7c3e"} Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.524956 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.527779 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-fwrs2" event={"ID":"863693a2-ad89-4568-abd2-d96f7f9db45e","Type":"ContainerStarted","Data":"b426dc518f52e15da9f05b392806406d0bda9cfd93b3e09033d7c2cfb5afe36a"} Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.530712 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" event={"ID":"425ed829-90c3-45f1-b162-107726016bfd","Type":"ContainerStarted","Data":"319e884d9d15b35869d77785815835589c4b85560088fd241a6a116b0c952b0b"} Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.544515 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-djfdx" event={"ID":"e420bb9a-54f6-426e-a975-a5fbd88ebcfe","Type":"ContainerStarted","Data":"0584d4c68bd54f40912533ff0cb4fb259f024a0aa2929c42401bd3a2c63fd5a0"} Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.544594 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-djfdx" event={"ID":"e420bb9a-54f6-426e-a975-a5fbd88ebcfe","Type":"ContainerStarted","Data":"5eb97d565fd744d7e4613613d3c08f1ea5b69f37cf68d2b6dd7ede571ccc3adc"} Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.547526 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.556879 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-psrg4" 
event={"ID":"e309235a-31b7-4789-9ba3-5839cab177a6","Type":"ContainerStarted","Data":"adab2f7f7470783c4d447a1c13226501ed0a4f5f125a09ac5636a00e7e467363"} Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.556939 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-psrg4" event={"ID":"e309235a-31b7-4789-9ba3-5839cab177a6","Type":"ContainerStarted","Data":"1fe2b5d6c899e46aee93ce5c3a258bca9010514ce8f49eccbf6a6c0f0d1388be"} Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.566035 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29525760-qdjzk" event={"ID":"2daa98ea-b766-495c-a3e7-5d73232ddc18","Type":"ContainerStarted","Data":"498388d0e251e58c582bbef583c6d3fd706afee4c41803c47b2c9dbf351ab3be"} Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.571635 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-gx2g8" event={"ID":"cd98dcc8-4803-40df-a2d1-88f48b1e14b1","Type":"ContainerStarted","Data":"dea992bc73d22cb1a34194ab70354b0b51c051cfd1a5d5cf7340cce16f30af3c"} Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.583362 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr" event={"ID":"fef1a991-20ef-447f-94d6-c18c5f875ae8","Type":"ContainerStarted","Data":"f221132b191382679667bf9834efdc2cdab354168c5d0c842e532e0fabc326a5"} Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.584605 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.585023 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.605065 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.605694 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-672k9" event={"ID":"4887f9da-b32d-4096-9491-d1368c94dfdc","Type":"ContainerStarted","Data":"777467b8e867fce7c83ab782e211e2ed8e5a50aa8ac33ba09885e7dd7e253108"} Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.627973 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.644923 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.672346 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.682938 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/830cfba4-378e-4f83-a28a-add80c1cb7e0-config\") pod \"etcd-operator-69b85846b6-599mk\" (UID: \"830cfba4-378e-4f83-a28a-add80c1cb7e0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.683003 5119 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ae023e2-1168-43b9-83b4-49ff02bb9ea4-console-config\") pod \"console-64d44f6ddf-vz6mj\" (UID: \"5ae023e2-1168-43b9-83b4-49ff02bb9ea4\") " pod="openshift-console/console-64d44f6ddf-vz6mj" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.683039 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/830cfba4-378e-4f83-a28a-add80c1cb7e0-etcd-service-ca\") pod \"etcd-operator-69b85846b6-599mk\" (UID: \"830cfba4-378e-4f83-a28a-add80c1cb7e0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.683063 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1b64ff0-878d-4b3d-8b8f-67cf8ea08987-config\") pod \"kube-apiserver-operator-575994946d-4q264\" (UID: \"b1b64ff0-878d-4b3d-8b8f-67cf8ea08987\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-4q264" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.683127 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/830cfba4-378e-4f83-a28a-add80c1cb7e0-serving-cert\") pod \"etcd-operator-69b85846b6-599mk\" (UID: \"830cfba4-378e-4f83-a28a-add80c1cb7e0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.683181 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1b64ff0-878d-4b3d-8b8f-67cf8ea08987-serving-cert\") pod \"kube-apiserver-operator-575994946d-4q264\" (UID: \"b1b64ff0-878d-4b3d-8b8f-67cf8ea08987\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-4q264" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.683226 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/830cfba4-378e-4f83-a28a-add80c1cb7e0-etcd-ca\") pod \"etcd-operator-69b85846b6-599mk\" (UID: \"830cfba4-378e-4f83-a28a-add80c1cb7e0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.683266 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/830cfba4-378e-4f83-a28a-add80c1cb7e0-etcd-client\") pod \"etcd-operator-69b85846b6-599mk\" (UID: \"830cfba4-378e-4f83-a28a-add80c1cb7e0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.684168 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1b64ff0-878d-4b3d-8b8f-67cf8ea08987-config\") pod \"kube-apiserver-operator-575994946d-4q264\" (UID: \"b1b64ff0-878d-4b3d-8b8f-67cf8ea08987\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-4q264" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.684912 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/830cfba4-378e-4f83-a28a-add80c1cb7e0-config\") pod \"etcd-operator-69b85846b6-599mk\" (UID: \"830cfba4-378e-4f83-a28a-add80c1cb7e0\") " 
pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.685672 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ae023e2-1168-43b9-83b4-49ff02bb9ea4-console-config\") pod \"console-64d44f6ddf-vz6mj\" (UID: \"5ae023e2-1168-43b9-83b4-49ff02bb9ea4\") " pod="openshift-console/console-64d44f6ddf-vz6mj" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.691209 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/830cfba4-378e-4f83-a28a-add80c1cb7e0-etcd-client\") pod \"etcd-operator-69b85846b6-599mk\" (UID: \"830cfba4-378e-4f83-a28a-add80c1cb7e0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.691755 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/830cfba4-378e-4f83-a28a-add80c1cb7e0-etcd-ca\") pod \"etcd-operator-69b85846b6-599mk\" (UID: \"830cfba4-378e-4f83-a28a-add80c1cb7e0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.694779 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.696336 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1b64ff0-878d-4b3d-8b8f-67cf8ea08987-serving-cert\") pod \"kube-apiserver-operator-575994946d-4q264\" (UID: \"b1b64ff0-878d-4b3d-8b8f-67cf8ea08987\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-4q264" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.698072 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/830cfba4-378e-4f83-a28a-add80c1cb7e0-serving-cert\") pod \"etcd-operator-69b85846b6-599mk\" (UID: \"830cfba4-378e-4f83-a28a-add80c1cb7e0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.705673 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.725698 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.734730 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/830cfba4-378e-4f83-a28a-add80c1cb7e0-etcd-service-ca\") pod \"etcd-operator-69b85846b6-599mk\" (UID: \"830cfba4-378e-4f83-a28a-add80c1cb7e0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.745742 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.766443 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.787077 5119 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.804938 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.826131 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.851305 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.872946 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.889439 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.906192 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.928797 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.937277 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.945713 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.964228 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.982880 5119 request.go:752] "Waited before sending request" delay="1.977195879s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-kpvmz&limit=500&resourceVersion=0" Feb 20 00:11:58 crc kubenswrapper[5119]: I0220 00:11:58.986867 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.005281 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.025325 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.044996 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.064809 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 
00:11:59.084262 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.106027 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.125504 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.185721 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-62kl9\" (UniqueName: \"kubernetes.io/projected/8eb821d0-6b6b-422d-bbc5-8394778f57cd-kube-api-access-62kl9\") pod \"multus-admission-controller-69db94689b-g2szw\" (UID: \"8eb821d0-6b6b-422d-bbc5-8394778f57cd\") " pod="openshift-multus/multus-admission-controller-69db94689b-g2szw" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.218582 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwz5h\" (UniqueName: \"kubernetes.io/projected/1a8aed0e-0d53-4db2-ac89-b0c953e8fc36-kube-api-access-bwz5h\") pod \"package-server-manager-77f986bd66-5vrqr\" (UID: \"1a8aed0e-0d53-4db2-ac89-b0c953e8fc36\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5vrqr" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.223956 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz9r7\" (UniqueName: \"kubernetes.io/projected/518958b8-c3df-4db5-87c9-6528f05e4bfe-kube-api-access-hz9r7\") pod \"catalog-operator-75ff9f647d-6tsqj\" (UID: \"518958b8-c3df-4db5-87c9-6528f05e4bfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6tsqj" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.246814 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-prxgx\" (UniqueName: \"kubernetes.io/projected/54acf8be-ab9f-4e85-8394-dfafbf121b67-kube-api-access-prxgx\") pod \"control-plane-machine-set-operator-75ffdb6fcd-k6wjq\" (UID: \"54acf8be-ab9f-4e85-8394-dfafbf121b67\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k6wjq" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.261422 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqqlg\" (UniqueName: \"kubernetes.io/projected/8e68c79d-159b-48a1-ba29-0ecebeeaf581-kube-api-access-gqqlg\") pod \"machine-config-operator-67c9d58cbb-zg88l\" (UID: \"8e68c79d-159b-48a1-ba29-0ecebeeaf581\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zg88l" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.286885 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sqnw\" (UniqueName: \"kubernetes.io/projected/edaa9a3d-ab5b-4848-a9a1-9af97200e40e-kube-api-access-8sqnw\") pod \"olm-operator-5cdf44d969-9vxhd\" (UID: \"edaa9a3d-ab5b-4848-a9a1-9af97200e40e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9vxhd" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.305954 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjlnd\" (UniqueName: \"kubernetes.io/projected/3a805538-2592-40cf-9131-444b1c0f3cbb-kube-api-access-rjlnd\") pod \"router-default-68cf44c8b8-gmfcd\" (UID: \"3a805538-2592-40cf-9131-444b1c0f3cbb\") " 
pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.320817 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrdts\" (UniqueName: \"kubernetes.io/projected/05c29cb3-f6e5-40f4-8780-0a98b0c52330-kube-api-access-zrdts\") pod \"migrator-866fcbc849-fvnk8\" (UID: \"05c29cb3-f6e5-40f4-8780-0a98b0c52330\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-fvnk8" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.344504 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfkmz\" (UniqueName: \"kubernetes.io/projected/30f80be9-b4ee-419d-82c1-0341637259d2-kube-api-access-tfkmz\") pod \"machine-config-controller-f9cdd68f7-8f5xb\" (UID: \"30f80be9-b4ee-419d-82c1-0341637259d2\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8f5xb" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.365456 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgkb9\" (UniqueName: \"kubernetes.io/projected/780bd316-cfcd-4cf9-b0b9-291cccd331fb-kube-api-access-bgkb9\") pod \"authentication-operator-7f5c659b84-mt9s7\" (UID: \"780bd316-cfcd-4cf9-b0b9-291cccd331fb\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mt9s7" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.381946 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2929140b-fc72-423b-a599-6893f45f258b-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-fd2rm\" (UID: \"2929140b-fc72-423b-a599-6893f45f258b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fd2rm" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.394206 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zg88l" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.403106 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-svn2b\" (UniqueName: \"kubernetes.io/projected/f0b5a231-2ef1-425a-ae19-7ff305bed3e0-kube-api-access-svn2b\") pod \"packageserver-7d4fc7d867-j8mqx\" (UID: \"f0b5a231-2ef1-425a-ae19-7ff305bed3e0\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-j8mqx" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.420717 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-g2szw" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.420891 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6tsqj" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.430074 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k6wjq" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.432986 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpjcw\" (UniqueName: \"kubernetes.io/projected/2c8310dc-cc1c-4c06-a86e-4147e8b11a7c-kube-api-access-zpjcw\") pod \"service-ca-74545575db-xk7zk\" (UID: \"2c8310dc-cc1c-4c06-a86e-4147e8b11a7c\") " pod="openshift-service-ca/service-ca-74545575db-xk7zk" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.436789 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5vrqr" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.445373 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97ee1eef-ee44-442d-ab55-8ae1d6bd364a-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-zshjz\" (UID: \"97ee1eef-ee44-442d-ab55-8ae1d6bd364a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zshjz" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.450519 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-fvnk8" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.464731 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.466956 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fd2rm" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.471812 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d17d486a-64e1-4f70-ba16-90bdc93550e7-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-z9n55\" (UID: \"d17d486a-64e1-4f70-ba16-90bdc93550e7\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z9n55" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.476243 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-j8mqx" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.489097 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9g54\" (UniqueName: \"kubernetes.io/projected/d17d486a-64e1-4f70-ba16-90bdc93550e7-kube-api-access-x9g54\") pod \"cluster-image-registry-operator-86c45576b9-z9n55\" (UID: \"d17d486a-64e1-4f70-ba16-90bdc93550e7\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z9n55" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.491458 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8f5xb" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.500805 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9vxhd" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.509941 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bplr5\" (UniqueName: \"kubernetes.io/projected/830cfba4-378e-4f83-a28a-add80c1cb7e0-kube-api-access-bplr5\") pod \"etcd-operator-69b85846b6-599mk\" (UID: \"830cfba4-378e-4f83-a28a-add80c1cb7e0\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.522041 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mt9s7" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.529511 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5vz8\" (UniqueName: \"kubernetes.io/projected/5ae023e2-1168-43b9-83b4-49ff02bb9ea4-kube-api-access-j5vz8\") pod \"console-64d44f6ddf-vz6mj\" (UID: \"5ae023e2-1168-43b9-83b4-49ff02bb9ea4\") " pod="openshift-console/console-64d44f6ddf-vz6mj" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.533818 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z9n55" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.545212 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.547339 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1b64ff0-878d-4b3d-8b8f-67cf8ea08987-kube-api-access\") pod \"kube-apiserver-operator-575994946d-4q264\" (UID: \"b1b64ff0-878d-4b3d-8b8f-67cf8ea08987\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-4q264" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.566145 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.585639 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.590500 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.586634 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d44f6ddf-vz6mj" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.611107 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.626184 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" event={"ID":"be18e81d-0071-4e9b-a38a-a2ad3052c0bc","Type":"ContainerStarted","Data":"58684224b811007a8cbd4e8b7aea2e230f7f4fff3e00885fb5b5a94910870798"} Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.628059 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.634245 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" event={"ID":"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead","Type":"ContainerStarted","Data":"8087b9e0bd574be9fc945c94f2d4ac2942ca3804b665f968a797f17954925868"} Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.634301 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" event={"ID":"bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead","Type":"ContainerStarted","Data":"ad269d9c846cc229e2f47e252ca838e8f6c69ce5bf4fd0004876ae44fb6fa64d"} Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.647034 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.648873 5119 generic.go:358] "Generic (PLEG): container finished" podID="da030a07-1d86-4b99-81e4-94c7ee3b5d58" containerID="08f8d03a4aec4ff79b4500886a9ea74aa6d797876b82ddf237d0eec67cd4f53e" exitCode=0 Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.648959 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-s6ttl" event={"ID":"da030a07-1d86-4b99-81e4-94c7ee3b5d58","Type":"ContainerDied","Data":"08f8d03a4aec4ff79b4500886a9ea74aa6d797876b82ddf237d0eec67cd4f53e"} Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.654185 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wqzpb" event={"ID":"017eb489-ceff-4140-a56b-f631b7fab529","Type":"ContainerStarted","Data":"f43c9aafd44a5f9954fe4e619dbeb7006f6bd52e1ad2b559f16530df9cffab5b"} Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.654235 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wqzpb" event={"ID":"017eb489-ceff-4140-a56b-f631b7fab529","Type":"ContainerStarted","Data":"3cae38518d9fa28c40be15275c81c1f7ef0be6347c2009cec00021e85bd05d79"} Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.654248 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wqzpb" event={"ID":"017eb489-ceff-4140-a56b-f631b7fab529","Type":"ContainerStarted","Data":"e2497bf27b5b9b14a52559c6a96efc8886f0c2d617ecf50efac7f41ce10317d8"} Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.662806 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-4q264" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.674265 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" event={"ID":"545f05c6-70cf-4ce3-ad50-ad4e264680ca","Type":"ContainerStarted","Data":"bf1b914b35b06971c8a11b79f34938b93153d1eea09186d5b23d72ef8e6c9601"} Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.675198 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.678851 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-xklbn" event={"ID":"0ca936b9-5497-4d4a-9927-6a7742282c62","Type":"ContainerStarted","Data":"e9378ab9906893dcc4dd6d49ba85b93064fd26123b8b6599fea8d40fd73c9c88"} Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.684137 5119 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-45lmz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.684179 5119 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" podUID="545f05c6-70cf-4ce3-ad50-ad4e264680ca" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.688685 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-xk7zk" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.698761 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-fwrs2" event={"ID":"863693a2-ad89-4568-abd2-d96f7f9db45e","Type":"ContainerStarted","Data":"d2e3de79c863086e324c570ca8a8042dda6949ba8695039184321c2f41b07bfc"} Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.710462 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" event={"ID":"425ed829-90c3-45f1-b162-107726016bfd","Type":"ContainerStarted","Data":"84d4b3b2ceac8f05a6447d5e2072576e00faf17ee77274040f63e026b11abdc3"} Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.710749 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.724776 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zg88l"] Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.727423 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-djfdx" event={"ID":"e420bb9a-54f6-426e-a975-a5fbd88ebcfe","Type":"ContainerStarted","Data":"497218324d6dbaa97ffc5ac44f75817d75ea7cd56362157b9dd1cea726a851a6"} Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.751401 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" event={"ID":"3a805538-2592-40cf-9131-444b1c0f3cbb","Type":"ContainerStarted","Data":"f6e74513935af988745ac948e4d3408a2cb5f1df1536db966f25efc068b78e05"} Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.755642 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-gx2g8" event={"ID":"cd98dcc8-4803-40df-a2d1-88f48b1e14b1","Type":"ContainerStarted","Data":"760256b4fbe96770d8cda15d1d2889d7eac198398e48564847714ae61ad9c6be"} Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.756746 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-gx2g8" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.777277 5119 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-gx2g8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.777355 5119 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-gx2g8" podUID="cd98dcc8-4803-40df-a2d1-88f48b1e14b1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.843378 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6tsqj"] Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.914817 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" podStartSLOduration=80.914800037 
podStartE2EDuration="1m20.914800037s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:11:59.912626509 +0000 UTC m=+101.891590811" watchObservedRunningTime="2026-02-20 00:11:59.914800037 +0000 UTC m=+101.893764339" Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.916907 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k6wjq"] Feb 20 00:11:59 crc kubenswrapper[5119]: I0220 00:11:59.980787 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-fwrs2" podStartSLOduration=80.98076894 podStartE2EDuration="1m20.98076894s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:11:59.948338898 +0000 UTC m=+101.927303190" watchObservedRunningTime="2026-02-20 00:11:59.98076894 +0000 UTC m=+101.959733232" Feb 20 00:12:00 crc kubenswrapper[5119]: I0220 00:12:00.037654 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-672k9" podStartSLOduration=81.037633278 podStartE2EDuration="1m21.037633278s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:00.036125728 +0000 UTC m=+102.015090020" watchObservedRunningTime="2026-02-20 00:12:00.037633278 +0000 UTC m=+102.016597570" Feb 20 00:12:00 crc kubenswrapper[5119]: I0220 00:12:00.078865 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-gx2g8" podStartSLOduration=81.078839475 podStartE2EDuration="1m21.078839475s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:00.077598502 +0000 UTC m=+102.056562794" watchObservedRunningTime="2026-02-20 00:12:00.078839475 +0000 UTC m=+102.057803767" Feb 20 00:12:00 crc kubenswrapper[5119]: I0220 00:12:00.203403 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29525760-qdjzk" podStartSLOduration=81.203383673 podStartE2EDuration="1m21.203383673s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:00.201055101 +0000 UTC m=+102.180019403" watchObservedRunningTime="2026-02-20 00:12:00.203383673 +0000 UTC m=+102.182347965" Feb 20 00:12:00 crc kubenswrapper[5119]: I0220 00:12:00.409409 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" podStartSLOduration=81.409382669 podStartE2EDuration="1m21.409382669s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:00.408972028 +0000 UTC m=+102.387936350" watchObservedRunningTime="2026-02-20 00:12:00.409382669 +0000 UTC m=+102.388346961" Feb 20 00:12:00 crc 
kubenswrapper[5119]: I0220 00:12:00.409737 5119 ???:1] "http: TLS handshake error from 192.168.126.11:57450: no serving certificate available for the kubelet" Feb 20 00:12:00 crc kubenswrapper[5119]: I0220 00:12:00.454189 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-g2szw"] Feb 20 00:12:00 crc kubenswrapper[5119]: I0220 00:12:00.484655 5119 ???:1] "http: TLS handshake error from 192.168.126.11:57460: no serving certificate available for the kubelet" Feb 20 00:12:00 crc kubenswrapper[5119]: I0220 00:12:00.595106 5119 ???:1] "http: TLS handshake error from 192.168.126.11:57462: no serving certificate available for the kubelet" Feb 20 00:12:00 crc kubenswrapper[5119]: I0220 00:12:00.672896 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5vrqr"] Feb 20 00:12:00 crc kubenswrapper[5119]: I0220 00:12:00.693259 5119 ???:1] "http: TLS handshake error from 192.168.126.11:57466: no serving certificate available for the kubelet" Feb 20 00:12:00 crc kubenswrapper[5119]: I0220 00:12:00.697599 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8f5xb"] Feb 20 00:12:00 crc kubenswrapper[5119]: I0220 00:12:00.718684 5119 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-h4nz2 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.25:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 20 00:12:00 crc kubenswrapper[5119]: I0220 00:12:00.718763 5119 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" podUID="425ed829-90c3-45f1-b162-107726016bfd" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.25:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 20 00:12:00 crc kubenswrapper[5119]: I0220 00:12:00.765876 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-fvnk8"] Feb 20 00:12:00 crc kubenswrapper[5119]: I0220 00:12:00.797204 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-j8mqx"] Feb 20 00:12:00 crc kubenswrapper[5119]: I0220 00:12:00.802137 5119 ???:1] "http: TLS handshake error from 192.168.126.11:57482: no serving certificate available for the kubelet" Feb 20 00:12:00 crc kubenswrapper[5119]: I0220 00:12:00.890945 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-g2szw" event={"ID":"8eb821d0-6b6b-422d-bbc5-8394778f57cd","Type":"ContainerStarted","Data":"59cc4b86a011e1dff3a1479ff0db34f39fd05037125f8762f1e5de15f5fc2691"} Feb 20 00:12:00 crc kubenswrapper[5119]: I0220 00:12:00.922113 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-s6ttl" event={"ID":"da030a07-1d86-4b99-81e4-94c7ee3b5d58","Type":"ContainerStarted","Data":"1453da1bdc47717a31eed2dc66d59edec8508ffa6bc19b0f6fcd4355062fda4e"} Feb 20 00:12:00 crc kubenswrapper[5119]: I0220 00:12:00.922766 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-config-operator/openshift-config-operator-5777786469-s6ttl" Feb 20 00:12:00 crc kubenswrapper[5119]: I0220 00:12:00.927442 5119 ???:1] "http: TLS handshake error from 192.168.126.11:57484: no serving certificate available for the kubelet" Feb 20 00:12:00 crc kubenswrapper[5119]: I0220 00:12:00.944284 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-vz6mj"] Feb 20 00:12:00 crc kubenswrapper[5119]: I0220 00:12:00.957745 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9vxhd"] Feb 20 00:12:00 crc kubenswrapper[5119]: I0220 00:12:00.965403 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-mt9s7"] Feb 20 00:12:00 crc kubenswrapper[5119]: I0220 00:12:00.978167 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8f5xb" event={"ID":"30f80be9-b4ee-419d-82c1-0341637259d2","Type":"ContainerStarted","Data":"c220e154f043fba9ed97d09eeb3816690a13982c9530a3ef0e286b2b85993ff7"} Feb 20 00:12:00 crc kubenswrapper[5119]: I0220 00:12:00.982874 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" event={"ID":"3a805538-2592-40cf-9131-444b1c0f3cbb","Type":"ContainerStarted","Data":"d67151f318390b80d3832df195af6a1bda8acf3d27af6ac797ca5a2475be77fb"} Feb 20 00:12:00 crc kubenswrapper[5119]: I0220 00:12:00.984131 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-z9n55"] Feb 20 00:12:00 crc kubenswrapper[5119]: I0220 00:12:00.992162 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5vrqr" event={"ID":"1a8aed0e-0d53-4db2-ac89-b0c953e8fc36","Type":"ContainerStarted","Data":"bbd0f254380f5c36852c5e24c803656a0369f10d595f2e26556fb5169806ca93"} Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.000748 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-4q264"] Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.001432 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fd2rm"] Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.016065 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k6wjq" event={"ID":"54acf8be-ab9f-4e85-8394-dfafbf121b67","Type":"ContainerStarted","Data":"2b8f74babc04d9055f5ed8d7014de52263c1b10142a2964230e5c346c993c55c"} Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.016446 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k6wjq" event={"ID":"54acf8be-ab9f-4e85-8394-dfafbf121b67","Type":"ContainerStarted","Data":"55e88b701923ba52a193b64af8ee25bcfd65df15c4f79d56f01925feaf769041"} Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.031973 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" podStartSLOduration=82.031948072 podStartE2EDuration="1m22.031948072s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:01.030194065 +0000 UTC m=+103.009158367" watchObservedRunningTime="2026-02-20 00:12:01.031948072 +0000 UTC m=+103.010912364" Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.032854 5119 ???:1] "http: TLS handshake error from 192.168.126.11:57488: no serving certificate available for the kubelet" Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.033105 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6tsqj" event={"ID":"518958b8-c3df-4db5-87c9-6528f05e4bfe","Type":"ContainerStarted","Data":"fbdfe4b044ef4ffcf5f108987f92fe0ac4fb035a9e618e433d01d2ac575dd577"} Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.033153 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6tsqj" event={"ID":"518958b8-c3df-4db5-87c9-6528f05e4bfe","Type":"ContainerStarted","Data":"f96146d04c2df7a63db59ed0a3bc8ef094f542bed4aaf94d54a79cb1de8ed967"} Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.034206 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6tsqj" Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.038599 5119 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-6tsqj container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.038752 5119 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6tsqj" podUID="518958b8-c3df-4db5-87c9-6528f05e4bfe" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.048061 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-xk7zk"] Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.062965 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-xklbn" podStartSLOduration=81.062940014 podStartE2EDuration="1m21.062940014s" podCreationTimestamp="2026-02-20 00:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:01.061372532 +0000 UTC m=+103.040336844" watchObservedRunningTime="2026-02-20 00:12:01.062940014 +0000 UTC m=+103.041904306" Feb 20 00:12:01 crc kubenswrapper[5119]: W0220 00:12:01.065944 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1b64ff0_878d_4b3d_8b8f_67cf8ea08987.slice/crio-e9026fcb6f7fe08c2fb954566a199b794300b77dd9fe2f9f8f0114a0c75381a7 WatchSource:0}: Error finding container e9026fcb6f7fe08c2fb954566a199b794300b77dd9fe2f9f8f0114a0c75381a7: Status 404 returned error can't find the container with id e9026fcb6f7fe08c2fb954566a199b794300b77dd9fe2f9f8f0114a0c75381a7 Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.066588 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zg88l" event={"ID":"8e68c79d-159b-48a1-ba29-0ecebeeaf581","Type":"ContainerStarted","Data":"14c0f1692200d0fda0ad9200f85c6d8b32cedb54d3bfec15764e53b405afad01"} Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.066684 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zg88l" event={"ID":"8e68c79d-159b-48a1-ba29-0ecebeeaf581","Type":"ContainerStarted","Data":"6c288bf3d6904f957dd27c7ac9d10271e73f072249bb16d3943f83a3d1b44c36"} Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.066698 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zg88l" event={"ID":"8e68c79d-159b-48a1-ba29-0ecebeeaf581","Type":"ContainerStarted","Data":"91f31d5bf1f5c4df4d3fcda83c6464be54b893f711d9e66d160e2003db49cbad"} Feb 20 00:12:01 crc kubenswrapper[5119]: W0220 00:12:01.072225 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2929140b_fc72_423b_a599_6893f45f258b.slice/crio-c9433f700679fe034a3a725b1ca42a6fde688eeb33de5283f320a552f62bcf0c WatchSource:0}: Error finding container c9433f700679fe034a3a725b1ca42a6fde688eeb33de5283f320a552f62bcf0c: Status 404 returned error can't find the container with id c9433f700679fe034a3a725b1ca42a6fde688eeb33de5283f320a552f62bcf0c Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.083091 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.111045 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-599mk"] Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.130621 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" podStartSLOduration=82.130598302 podStartE2EDuration="1m22.130598302s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:01.126090221 +0000 UTC m=+103.105054533" watchObservedRunningTime="2026-02-20 00:12:01.130598302 +0000 UTC m=+103.109562594" Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.144391 5119 ???:1] "http: TLS handshake error from 192.168.126.11:57504: no serving certificate available for the kubelet" Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.235288 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr" podStartSLOduration=81.235263886 podStartE2EDuration="1m21.235263886s" podCreationTimestamp="2026-02-20 00:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:01.19785346 +0000 UTC m=+103.176817762" watchObservedRunningTime="2026-02-20 00:12:01.235263886 +0000 UTC m=+103.214228178" Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.237012 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-djfdx" podStartSLOduration=82.237006312 podStartE2EDuration="1m22.237006312s" 
podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:01.234500086 +0000 UTC m=+103.213464388" watchObservedRunningTime="2026-02-20 00:12:01.237006312 +0000 UTC m=+103.215970604" Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.465673 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.467899 5119 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-gmfcd container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.467997 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" podUID="3a805538-2592-40cf-9131-444b1c0f3cbb" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.483739 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-psrg4" podStartSLOduration=82.483717074 podStartE2EDuration="1m22.483717074s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:01.481691109 +0000 UTC m=+103.460655441" watchObservedRunningTime="2026-02-20 00:12:01.483717074 +0000 UTC m=+103.462681376" Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.561688 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-wqzpb" podStartSLOduration=82.561666498 podStartE2EDuration="1m22.561666498s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:01.560131967 +0000 UTC m=+103.539096259" watchObservedRunningTime="2026-02-20 00:12:01.561666498 +0000 UTC m=+103.540630790" Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.683516 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-k6wjq" podStartSLOduration=82.683496322 podStartE2EDuration="1m22.683496322s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:01.640772955 +0000 UTC m=+103.619737247" watchObservedRunningTime="2026-02-20 00:12:01.683496322 +0000 UTC m=+103.662460614" Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.760250 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6tsqj" podStartSLOduration=82.760229085 podStartE2EDuration="1m22.760229085s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:01.722734157 +0000 UTC m=+103.701698449" 
watchObservedRunningTime="2026-02-20 00:12:01.760229085 +0000 UTC m=+103.739193377" Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.799915 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-zg88l" podStartSLOduration=82.799893691 podStartE2EDuration="1m22.799893691s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:01.798650798 +0000 UTC m=+103.777615090" watchObservedRunningTime="2026-02-20 00:12:01.799893691 +0000 UTC m=+103.778857983" Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.800738 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-s6ttl" podStartSLOduration=82.800734303 podStartE2EDuration="1m22.800734303s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:01.761637633 +0000 UTC m=+103.740601925" watchObservedRunningTime="2026-02-20 00:12:01.800734303 +0000 UTC m=+103.779698595" Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.824042 5119 ???:1] "http: TLS handshake error from 192.168.126.11:57508: no serving certificate available for the kubelet" Feb 20 00:12:01 crc kubenswrapper[5119]: I0220 00:12:01.884179 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" podStartSLOduration=82.884160466 podStartE2EDuration="1m22.884160466s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:01.848730293 +0000 UTC m=+103.827694585" watchObservedRunningTime="2026-02-20 00:12:01.884160466 +0000 UTC m=+103.863124758" Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.034646 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.035064 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.067841 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.068758 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.094860 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.174046 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-g2szw" event={"ID":"8eb821d0-6b6b-422d-bbc5-8394778f57cd","Type":"ContainerStarted","Data":"8d87b881fcd41ef7b76e8e29d2315af56c1b6d7236008d67efb3605f6fed708e"} Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.184093 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fd2rm" event={"ID":"2929140b-fc72-423b-a599-6893f45f258b","Type":"ContainerStarted","Data":"c9433f700679fe034a3a725b1ca42a6fde688eeb33de5283f320a552f62bcf0c"} Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.224598 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8f5xb" event={"ID":"30f80be9-b4ee-419d-82c1-0341637259d2","Type":"ContainerStarted","Data":"e96ab6bf0d6846e4e3c32c91391d88b35ee3b722d45b6b36f935318a276eca5d"} Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.255803 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-vz6mj" event={"ID":"5ae023e2-1168-43b9-83b4-49ff02bb9ea4","Type":"ContainerStarted","Data":"e8adff536c7cc2a578a9f79e8adb3285e10531e57d2bee0cdb83ab0450eca106"} Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.255863 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-vz6mj" event={"ID":"5ae023e2-1168-43b9-83b4-49ff02bb9ea4","Type":"ContainerStarted","Data":"6df0cb570dfe1b72776e5de1c0e7a67d3b402bee8b4d5eb530c916df0b3e2f92"} Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.259834 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mt9s7" event={"ID":"780bd316-cfcd-4cf9-b0b9-291cccd331fb","Type":"ContainerStarted","Data":"6c41104867f21498e628863e51e6368e2d85f202ffe2a8c3ee43741e8ee843fc"} Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.259867 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mt9s7" event={"ID":"780bd316-cfcd-4cf9-b0b9-291cccd331fb","Type":"ContainerStarted","Data":"85b00f157b5d4f861072ec3404fc76f50656ca18268be1859d71c85ef8e7d14a"} Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.277775 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z9n55" event={"ID":"d17d486a-64e1-4f70-ba16-90bdc93550e7","Type":"ContainerStarted","Data":"07783cbb2ab6ee43ff62a7baa477c5efd59f1173955475696c86c1d5e2ac5ba0"} Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.277845 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z9n55" event={"ID":"d17d486a-64e1-4f70-ba16-90bdc93550e7","Type":"ContainerStarted","Data":"2cfd03c331a90d053c43cd44e4291220cb6780a973ff10f6f58cedc4e0c99b6c"} Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.294093 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-xk7zk" event={"ID":"2c8310dc-cc1c-4c06-a86e-4147e8b11a7c","Type":"ContainerStarted","Data":"4a48be713bf4a0aa8196d01a9b27e05e25e04ff421a2fbc9ed7df320bce1fa6f"} Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.311300 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5vrqr" event={"ID":"1a8aed0e-0d53-4db2-ac89-b0c953e8fc36","Type":"ContainerStarted","Data":"65e05b1bfd6b5bee75a714fd237b2f26a11f7527f17db42c5061fc8e20b847d0"} Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.311721 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5vrqr" Feb 20 
00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.336524 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-mt9s7" podStartSLOduration=83.336409681 podStartE2EDuration="1m23.336409681s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:02.333209934 +0000 UTC m=+104.312174216" watchObservedRunningTime="2026-02-20 00:12:02.336409681 +0000 UTC m=+104.315373973" Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.340001 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9vxhd" event={"ID":"edaa9a3d-ab5b-4848-a9a1-9af97200e40e","Type":"ContainerStarted","Data":"cc22e71b625a13add5a7c2ccbe5d6ca90c15cd2f3ac24e8b9d96266fa72a8c13"} Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.340079 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9vxhd" event={"ID":"edaa9a3d-ab5b-4848-a9a1-9af97200e40e","Type":"ContainerStarted","Data":"013ea98edab495a31740f2cc754c0f69db760f3b49fd316a0b158a7e63da1475"} Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.340711 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9vxhd" Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.343262 5119 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-9vxhd container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.343316 5119 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9vxhd" podUID="edaa9a3d-ab5b-4848-a9a1-9af97200e40e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.349881 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" event={"ID":"830cfba4-378e-4f83-a28a-add80c1cb7e0","Type":"ContainerStarted","Data":"7c1ec0842977a7017681d23599ddae0e7fcf05e9279febb99889a0340b631737"} Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.351824 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-fvnk8" event={"ID":"05c29cb3-f6e5-40f4-8780-0a98b0c52330","Type":"ContainerStarted","Data":"2f073d9ebf31c6f1fac4b3b27d2cdef2beccf6c84ce12f7e49884297b74fc277"} Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.351846 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-fvnk8" event={"ID":"05c29cb3-f6e5-40f4-8780-0a98b0c52330","Type":"ContainerStarted","Data":"757482a016a48c9dcc98241c4816c0f8e5565cb406501da58698b38523783b33"} Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.366609 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-4q264" 
event={"ID":"b1b64ff0-878d-4b3d-8b8f-67cf8ea08987","Type":"ContainerStarted","Data":"e9026fcb6f7fe08c2fb954566a199b794300b77dd9fe2f9f8f0114a0c75381a7"} Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.384245 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-z9n55" podStartSLOduration=83.384224236 podStartE2EDuration="1m23.384224236s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:02.38328041 +0000 UTC m=+104.362244712" watchObservedRunningTime="2026-02-20 00:12:02.384224236 +0000 UTC m=+104.363188528" Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.395243 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-j8mqx" event={"ID":"f0b5a231-2ef1-425a-ae19-7ff305bed3e0","Type":"ContainerStarted","Data":"8c930a22f79b9d7febc920f4ca91313889b00ae814eb066faa797afcc4f44b3c"} Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.395302 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-j8mqx" event={"ID":"f0b5a231-2ef1-425a-ae19-7ff305bed3e0","Type":"ContainerStarted","Data":"e12dc1fc8bddac216014355748117360fc8e121e5723bd5d3ecb746c3c3a22c5"} Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.395728 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-j8mqx" Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.413754 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-vz6mj" podStartSLOduration=83.413723709 podStartE2EDuration="1m23.413723709s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:02.407499441 +0000 UTC m=+104.386463733" watchObservedRunningTime="2026-02-20 00:12:02.413723709 +0000 UTC m=+104.392688001" Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.414138 5119 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-j8mqx container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" start-of-body= Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.414188 5119 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-j8mqx" podUID="f0b5a231-2ef1-425a-ae19-7ff305bed3e0" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.439714 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9vxhd" podStartSLOduration=83.439693286 podStartE2EDuration="1m23.439693286s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:02.437712533 +0000 UTC m=+104.416676825" watchObservedRunningTime="2026-02-20 
00:12:02.439693286 +0000 UTC m=+104.418657578" Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.470279 5119 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-gmfcd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 00:12:02 crc kubenswrapper[5119]: [-]has-synced failed: reason withheld Feb 20 00:12:02 crc kubenswrapper[5119]: [+]process-running ok Feb 20 00:12:02 crc kubenswrapper[5119]: healthz check failed Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.470355 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" podUID="3a805538-2592-40cf-9131-444b1c0f3cbb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.476453 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-4q264" podStartSLOduration=83.476432124 podStartE2EDuration="1m23.476432124s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:02.475464258 +0000 UTC m=+104.454428570" watchObservedRunningTime="2026-02-20 00:12:02.476432124 +0000 UTC m=+104.455396416" Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.499358 5119 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-8qbnc container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 20 00:12:02 crc kubenswrapper[5119]: [+]log ok Feb 20 00:12:02 crc kubenswrapper[5119]: [+]etcd ok Feb 20 00:12:02 crc kubenswrapper[5119]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 20 00:12:02 crc kubenswrapper[5119]: [+]poststarthook/generic-apiserver-start-informers ok Feb 20 00:12:02 crc kubenswrapper[5119]: [+]poststarthook/max-in-flight-filter ok Feb 20 00:12:02 crc kubenswrapper[5119]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 20 00:12:02 crc kubenswrapper[5119]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 20 00:12:02 crc kubenswrapper[5119]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 20 00:12:02 crc kubenswrapper[5119]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 20 00:12:02 crc kubenswrapper[5119]: [+]poststarthook/project.openshift.io-projectcache ok Feb 20 00:12:02 crc kubenswrapper[5119]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 20 00:12:02 crc kubenswrapper[5119]: [+]poststarthook/openshift.io-startinformers ok Feb 20 00:12:02 crc kubenswrapper[5119]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 20 00:12:02 crc kubenswrapper[5119]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 20 00:12:02 crc kubenswrapper[5119]: livez check failed Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.499446 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" podUID="bf4f78c7-49fa-42fb-9fe9-8cc8594b2ead" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.538053 5119 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-j8mqx" podStartSLOduration=83.53802866 podStartE2EDuration="1m23.53802866s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:02.535696937 +0000 UTC m=+104.514661249" watchObservedRunningTime="2026-02-20 00:12:02.53802866 +0000 UTC m=+104.516992952" Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.567178 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5vrqr" podStartSLOduration=83.567150662 podStartE2EDuration="1m23.567150662s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:02.565431286 +0000 UTC m=+104.544395578" watchObservedRunningTime="2026-02-20 00:12:02.567150662 +0000 UTC m=+104.546114944" Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.878008 5119 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.892247 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zshjz" Feb 20 00:12:02 crc kubenswrapper[5119]: I0220 00:12:02.894193 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:02 crc kubenswrapper[5119]: E0220 00:12:02.894637 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:03.394619053 +0000 UTC m=+105.373583345 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.015134 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.015443 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/965b8822-0f09-43d5-a864-775966130c7d-mountpoint-dir\") pod \"csi-hostpathplugin-4mff9\" (UID: \"965b8822-0f09-43d5-a864-775966130c7d\") " pod="hostpath-provisioner/csi-hostpathplugin-4mff9" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.015475 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pgg2\" (UniqueName: \"kubernetes.io/projected/3ac40fa4-748c-4728-8297-697ebf1bf757-kube-api-access-8pgg2\") pod \"ingress-operator-6b9cb4dbcf-hhj8g\" (UID: \"3ac40fa4-748c-4728-8297-697ebf1bf757\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-hhj8g" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.015499 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58dea456-427d-4599-a6b3-9ea369e332b2-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-6kwxp\" (UID: \"58dea456-427d-4599-a6b3-9ea369e332b2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6kwxp" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.015519 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58dea456-427d-4599-a6b3-9ea369e332b2-config\") pod \"kube-storage-version-migrator-operator-565b79b866-6kwxp\" (UID: \"58dea456-427d-4599-a6b3-9ea369e332b2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6kwxp" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.015588 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-registry-tls\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:03 crc kubenswrapper[5119]: E0220 00:12:03.015835 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:03.51579954 +0000 UTC m=+105.494763832 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.018050 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3ac40fa4-748c-4728-8297-697ebf1bf757-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-hhj8g\" (UID: \"3ac40fa4-748c-4728-8297-697ebf1bf757\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-hhj8g" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.018100 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/202de310-1c98-4a10-9ea6-b32d9debbd8c-tmp-dir\") pod \"dns-default-8dmqk\" (UID: \"202de310-1c98-4a10-9ea6-b32d9debbd8c\") " pod="openshift-dns/dns-default-8dmqk" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.018462 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1a0d9df3-9b02-4473-bf24-e9af2df871e6-tmp-dir\") pod \"dns-operator-799b87ffcd-whcrz\" (UID: \"1a0d9df3-9b02-4473-bf24-e9af2df871e6\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-whcrz" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.018529 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89dsk\" (UniqueName: \"kubernetes.io/projected/374127c8-2589-49b2-9945-92cecb2993de-kube-api-access-89dsk\") pod \"machine-config-server-nglnc\" (UID: \"374127c8-2589-49b2-9945-92cecb2993de\") " pod="openshift-machine-config-operator/machine-config-server-nglnc" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.019794 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f09adde3-d2ec-4383-ae3f-1eb9491e0b3c-config\") pod \"openshift-controller-manager-operator-686468bdd5-wwrm5\" (UID: \"f09adde3-d2ec-4383-ae3f-1eb9491e0b3c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wwrm5" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.020001 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/965b8822-0f09-43d5-a864-775966130c7d-registration-dir\") pod \"csi-hostpathplugin-4mff9\" (UID: \"965b8822-0f09-43d5-a864-775966130c7d\") " pod="hostpath-provisioner/csi-hostpathplugin-4mff9" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.020093 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/202de310-1c98-4a10-9ea6-b32d9debbd8c-metrics-tls\") pod \"dns-default-8dmqk\" (UID: \"202de310-1c98-4a10-9ea6-b32d9debbd8c\") " pod="openshift-dns/dns-default-8dmqk" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.020113 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/82841644-be5e-4c71-a968-887f117d7b66-serving-cert\") pod \"console-operator-67c89758df-qg8t8\" (UID: \"82841644-be5e-4c71-a968-887f117d7b66\") " pod="openshift-console-operator/console-operator-67c89758df-qg8t8" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.020133 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/374127c8-2589-49b2-9945-92cecb2993de-node-bootstrap-token\") pod \"machine-config-server-nglnc\" (UID: \"374127c8-2589-49b2-9945-92cecb2993de\") " pod="openshift-machine-config-operator/machine-config-server-nglnc" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.020150 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xrj7\" (UniqueName: \"kubernetes.io/projected/760dbf42-7c22-4f3b-926d-88e1ce4e2963-kube-api-access-5xrj7\") pod \"ingress-canary-sn42f\" (UID: \"760dbf42-7c22-4f3b-926d-88e1ce4e2963\") " pod="openshift-ingress-canary/ingress-canary-sn42f" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.020335 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-trusted-ca\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.020485 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4bvf\" (UniqueName: \"kubernetes.io/projected/202de310-1c98-4a10-9ea6-b32d9debbd8c-kube-api-access-w4bvf\") pod \"dns-default-8dmqk\" (UID: \"202de310-1c98-4a10-9ea6-b32d9debbd8c\") " pod="openshift-dns/dns-default-8dmqk" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.020567 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f09adde3-d2ec-4383-ae3f-1eb9491e0b3c-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-wwrm5\" (UID: \"f09adde3-d2ec-4383-ae3f-1eb9491e0b3c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wwrm5" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.020967 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bcdbee0b-45bb-462c-aac5-ccb96d5b814b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-7c2z6\" (UID: \"bcdbee0b-45bb-462c-aac5-ccb96d5b814b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7c2z6" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.020989 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/bcdbee0b-45bb-462c-aac5-ccb96d5b814b-ready\") pod \"cni-sysctl-allowlist-ds-7c2z6\" (UID: \"bcdbee0b-45bb-462c-aac5-ccb96d5b814b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7c2z6" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.021031 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/965b8822-0f09-43d5-a864-775966130c7d-csi-data-dir\") pod \"csi-hostpathplugin-4mff9\" 
(UID: \"965b8822-0f09-43d5-a864-775966130c7d\") " pod="hostpath-provisioner/csi-hostpathplugin-4mff9" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.021048 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgbbh\" (UniqueName: \"kubernetes.io/projected/82841644-be5e-4c71-a968-887f117d7b66-kube-api-access-wgbbh\") pod \"console-operator-67c89758df-qg8t8\" (UID: \"82841644-be5e-4c71-a968-887f117d7b66\") " pod="openshift-console-operator/console-operator-67c89758df-qg8t8" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.021103 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-bound-sa-token\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.021211 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ac40fa4-748c-4728-8297-697ebf1bf757-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-hhj8g\" (UID: \"3ac40fa4-748c-4728-8297-697ebf1bf757\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-hhj8g" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.021387 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1a0d9df3-9b02-4473-bf24-e9af2df871e6-metrics-tls\") pod \"dns-operator-799b87ffcd-whcrz\" (UID: \"1a0d9df3-9b02-4473-bf24-e9af2df871e6\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-whcrz" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.021417 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49v5x\" (UniqueName: \"kubernetes.io/projected/bcdbee0b-45bb-462c-aac5-ccb96d5b814b-kube-api-access-49v5x\") pod \"cni-sysctl-allowlist-ds-7c2z6\" (UID: \"bcdbee0b-45bb-462c-aac5-ccb96d5b814b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7c2z6" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.022963 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-registry-certificates\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.023312 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkg6w\" (UniqueName: \"kubernetes.io/projected/58dea456-427d-4599-a6b3-9ea369e332b2-kube-api-access-tkg6w\") pod \"kube-storage-version-migrator-operator-565b79b866-6kwxp\" (UID: \"58dea456-427d-4599-a6b3-9ea369e332b2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6kwxp" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.024033 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bqq9\" (UniqueName: \"kubernetes.io/projected/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-kube-api-access-7bqq9\") pod \"image-registry-66587d64c8-qh5n2\" (UID: 
\"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.024078 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/760dbf42-7c22-4f3b-926d-88e1ce4e2963-cert\") pod \"ingress-canary-sn42f\" (UID: \"760dbf42-7c22-4f3b-926d-88e1ce4e2963\") " pod="openshift-ingress-canary/ingress-canary-sn42f" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.024178 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/374127c8-2589-49b2-9945-92cecb2993de-certs\") pod \"machine-config-server-nglnc\" (UID: \"374127c8-2589-49b2-9945-92cecb2993de\") " pod="openshift-machine-config-operator/machine-config-server-nglnc" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.024208 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f09adde3-d2ec-4383-ae3f-1eb9491e0b3c-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-wwrm5\" (UID: \"f09adde3-d2ec-4383-ae3f-1eb9491e0b3c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wwrm5" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.025982 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/965b8822-0f09-43d5-a864-775966130c7d-socket-dir\") pod \"csi-hostpathplugin-4mff9\" (UID: \"965b8822-0f09-43d5-a864-775966130c7d\") " pod="hostpath-provisioner/csi-hostpathplugin-4mff9" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.026871 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c8rk\" (UniqueName: \"kubernetes.io/projected/f09adde3-d2ec-4383-ae3f-1eb9491e0b3c-kube-api-access-7c8rk\") pod \"openshift-controller-manager-operator-686468bdd5-wwrm5\" (UID: \"f09adde3-d2ec-4383-ae3f-1eb9491e0b3c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wwrm5" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.027760 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82841644-be5e-4c71-a968-887f117d7b66-trusted-ca\") pod \"console-operator-67c89758df-qg8t8\" (UID: \"82841644-be5e-4c71-a968-887f117d7b66\") " pod="openshift-console-operator/console-operator-67c89758df-qg8t8" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.028320 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d82q\" (UniqueName: \"kubernetes.io/projected/965b8822-0f09-43d5-a864-775966130c7d-kube-api-access-9d82q\") pod \"csi-hostpathplugin-4mff9\" (UID: \"965b8822-0f09-43d5-a864-775966130c7d\") " pod="hostpath-provisioner/csi-hostpathplugin-4mff9" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.028470 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " 
pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.028494 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/bcdbee0b-45bb-462c-aac5-ccb96d5b814b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-7c2z6\" (UID: \"bcdbee0b-45bb-462c-aac5-ccb96d5b814b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7c2z6" Feb 20 00:12:03 crc kubenswrapper[5119]: E0220 00:12:03.028949 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:03.528933183 +0000 UTC m=+105.507897475 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.029333 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-ca-trust-extracted\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.030247 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-installation-pull-secrets\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.030310 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/202de310-1c98-4a10-9ea6-b32d9debbd8c-config-volume\") pod \"dns-default-8dmqk\" (UID: \"202de310-1c98-4a10-9ea6-b32d9debbd8c\") " pod="openshift-dns/dns-default-8dmqk" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.030355 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm7c9\" (UniqueName: \"kubernetes.io/projected/8492563b-3d36-4b7a-a88b-ffd27fd60eb1-kube-api-access-cm7c9\") pod \"downloads-747b44746d-8mw6h\" (UID: \"8492563b-3d36-4b7a-a88b-ffd27fd60eb1\") " pod="openshift-console/downloads-747b44746d-8mw6h" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.030459 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7bd6\" (UniqueName: \"kubernetes.io/projected/1a0d9df3-9b02-4473-bf24-e9af2df871e6-kube-api-access-q7bd6\") pod \"dns-operator-799b87ffcd-whcrz\" (UID: \"1a0d9df3-9b02-4473-bf24-e9af2df871e6\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-whcrz" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.030585 5119 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ac40fa4-748c-4728-8297-697ebf1bf757-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-hhj8g\" (UID: \"3ac40fa4-748c-4728-8297-697ebf1bf757\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-hhj8g" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.030611 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/965b8822-0f09-43d5-a864-775966130c7d-plugins-dir\") pod \"csi-hostpathplugin-4mff9\" (UID: \"965b8822-0f09-43d5-a864-775966130c7d\") " pod="hostpath-provisioner/csi-hostpathplugin-4mff9" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.030659 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82841644-be5e-4c71-a968-887f117d7b66-config\") pod \"console-operator-67c89758df-qg8t8\" (UID: \"82841644-be5e-4c71-a968-887f117d7b66\") " pod="openshift-console-operator/console-operator-67c89758df-qg8t8" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.133306 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.133814 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3ac40fa4-748c-4728-8297-697ebf1bf757-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-hhj8g\" (UID: \"3ac40fa4-748c-4728-8297-697ebf1bf757\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-hhj8g" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.133846 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/202de310-1c98-4a10-9ea6-b32d9debbd8c-tmp-dir\") pod \"dns-default-8dmqk\" (UID: \"202de310-1c98-4a10-9ea6-b32d9debbd8c\") " pod="openshift-dns/dns-default-8dmqk" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.133875 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1a0d9df3-9b02-4473-bf24-e9af2df871e6-tmp-dir\") pod \"dns-operator-799b87ffcd-whcrz\" (UID: \"1a0d9df3-9b02-4473-bf24-e9af2df871e6\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-whcrz" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.133898 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-89dsk\" (UniqueName: \"kubernetes.io/projected/374127c8-2589-49b2-9945-92cecb2993de-kube-api-access-89dsk\") pod \"machine-config-server-nglnc\" (UID: \"374127c8-2589-49b2-9945-92cecb2993de\") " pod="openshift-machine-config-operator/machine-config-server-nglnc" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.133928 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f09adde3-d2ec-4383-ae3f-1eb9491e0b3c-config\") pod \"openshift-controller-manager-operator-686468bdd5-wwrm5\" (UID: \"f09adde3-d2ec-4383-ae3f-1eb9491e0b3c\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wwrm5" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.133958 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/965b8822-0f09-43d5-a864-775966130c7d-registration-dir\") pod \"csi-hostpathplugin-4mff9\" (UID: \"965b8822-0f09-43d5-a864-775966130c7d\") " pod="hostpath-provisioner/csi-hostpathplugin-4mff9" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.133997 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/202de310-1c98-4a10-9ea6-b32d9debbd8c-metrics-tls\") pod \"dns-default-8dmqk\" (UID: \"202de310-1c98-4a10-9ea6-b32d9debbd8c\") " pod="openshift-dns/dns-default-8dmqk" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134019 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82841644-be5e-4c71-a968-887f117d7b66-serving-cert\") pod \"console-operator-67c89758df-qg8t8\" (UID: \"82841644-be5e-4c71-a968-887f117d7b66\") " pod="openshift-console-operator/console-operator-67c89758df-qg8t8" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134042 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/374127c8-2589-49b2-9945-92cecb2993de-node-bootstrap-token\") pod \"machine-config-server-nglnc\" (UID: \"374127c8-2589-49b2-9945-92cecb2993de\") " pod="openshift-machine-config-operator/machine-config-server-nglnc" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134063 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5xrj7\" (UniqueName: \"kubernetes.io/projected/760dbf42-7c22-4f3b-926d-88e1ce4e2963-kube-api-access-5xrj7\") pod \"ingress-canary-sn42f\" (UID: \"760dbf42-7c22-4f3b-926d-88e1ce4e2963\") " pod="openshift-ingress-canary/ingress-canary-sn42f" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134098 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-trusted-ca\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134118 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w4bvf\" (UniqueName: \"kubernetes.io/projected/202de310-1c98-4a10-9ea6-b32d9debbd8c-kube-api-access-w4bvf\") pod \"dns-default-8dmqk\" (UID: \"202de310-1c98-4a10-9ea6-b32d9debbd8c\") " pod="openshift-dns/dns-default-8dmqk" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134140 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f09adde3-d2ec-4383-ae3f-1eb9491e0b3c-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-wwrm5\" (UID: \"f09adde3-d2ec-4383-ae3f-1eb9491e0b3c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wwrm5" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134169 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/bcdbee0b-45bb-462c-aac5-ccb96d5b814b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-7c2z6\" (UID: \"bcdbee0b-45bb-462c-aac5-ccb96d5b814b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7c2z6" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134191 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/bcdbee0b-45bb-462c-aac5-ccb96d5b814b-ready\") pod \"cni-sysctl-allowlist-ds-7c2z6\" (UID: \"bcdbee0b-45bb-462c-aac5-ccb96d5b814b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7c2z6" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134217 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/965b8822-0f09-43d5-a864-775966130c7d-csi-data-dir\") pod \"csi-hostpathplugin-4mff9\" (UID: \"965b8822-0f09-43d5-a864-775966130c7d\") " pod="hostpath-provisioner/csi-hostpathplugin-4mff9" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134240 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wgbbh\" (UniqueName: \"kubernetes.io/projected/82841644-be5e-4c71-a968-887f117d7b66-kube-api-access-wgbbh\") pod \"console-operator-67c89758df-qg8t8\" (UID: \"82841644-be5e-4c71-a968-887f117d7b66\") " pod="openshift-console-operator/console-operator-67c89758df-qg8t8" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134270 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-bound-sa-token\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134298 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ac40fa4-748c-4728-8297-697ebf1bf757-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-hhj8g\" (UID: \"3ac40fa4-748c-4728-8297-697ebf1bf757\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-hhj8g" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134330 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1a0d9df3-9b02-4473-bf24-e9af2df871e6-metrics-tls\") pod \"dns-operator-799b87ffcd-whcrz\" (UID: \"1a0d9df3-9b02-4473-bf24-e9af2df871e6\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-whcrz" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134352 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-49v5x\" (UniqueName: \"kubernetes.io/projected/bcdbee0b-45bb-462c-aac5-ccb96d5b814b-kube-api-access-49v5x\") pod \"cni-sysctl-allowlist-ds-7c2z6\" (UID: \"bcdbee0b-45bb-462c-aac5-ccb96d5b814b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7c2z6" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134378 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-registry-certificates\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134417 5119 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tkg6w\" (UniqueName: \"kubernetes.io/projected/58dea456-427d-4599-a6b3-9ea369e332b2-kube-api-access-tkg6w\") pod \"kube-storage-version-migrator-operator-565b79b866-6kwxp\" (UID: \"58dea456-427d-4599-a6b3-9ea369e332b2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6kwxp" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134442 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7bqq9\" (UniqueName: \"kubernetes.io/projected/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-kube-api-access-7bqq9\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134463 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/760dbf42-7c22-4f3b-926d-88e1ce4e2963-cert\") pod \"ingress-canary-sn42f\" (UID: \"760dbf42-7c22-4f3b-926d-88e1ce4e2963\") " pod="openshift-ingress-canary/ingress-canary-sn42f" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134495 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/374127c8-2589-49b2-9945-92cecb2993de-certs\") pod \"machine-config-server-nglnc\" (UID: \"374127c8-2589-49b2-9945-92cecb2993de\") " pod="openshift-machine-config-operator/machine-config-server-nglnc" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134520 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f09adde3-d2ec-4383-ae3f-1eb9491e0b3c-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-wwrm5\" (UID: \"f09adde3-d2ec-4383-ae3f-1eb9491e0b3c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wwrm5" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134590 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/965b8822-0f09-43d5-a864-775966130c7d-socket-dir\") pod \"csi-hostpathplugin-4mff9\" (UID: \"965b8822-0f09-43d5-a864-775966130c7d\") " pod="hostpath-provisioner/csi-hostpathplugin-4mff9" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134628 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7c8rk\" (UniqueName: \"kubernetes.io/projected/f09adde3-d2ec-4383-ae3f-1eb9491e0b3c-kube-api-access-7c8rk\") pod \"openshift-controller-manager-operator-686468bdd5-wwrm5\" (UID: \"f09adde3-d2ec-4383-ae3f-1eb9491e0b3c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wwrm5" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134670 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82841644-be5e-4c71-a968-887f117d7b66-trusted-ca\") pod \"console-operator-67c89758df-qg8t8\" (UID: \"82841644-be5e-4c71-a968-887f117d7b66\") " pod="openshift-console-operator/console-operator-67c89758df-qg8t8" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134705 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9d82q\" (UniqueName: 
\"kubernetes.io/projected/965b8822-0f09-43d5-a864-775966130c7d-kube-api-access-9d82q\") pod \"csi-hostpathplugin-4mff9\" (UID: \"965b8822-0f09-43d5-a864-775966130c7d\") " pod="hostpath-provisioner/csi-hostpathplugin-4mff9" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134743 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/bcdbee0b-45bb-462c-aac5-ccb96d5b814b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-7c2z6\" (UID: \"bcdbee0b-45bb-462c-aac5-ccb96d5b814b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7c2z6" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134774 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-ca-trust-extracted\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134797 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-installation-pull-secrets\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134819 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/202de310-1c98-4a10-9ea6-b32d9debbd8c-config-volume\") pod \"dns-default-8dmqk\" (UID: \"202de310-1c98-4a10-9ea6-b32d9debbd8c\") " pod="openshift-dns/dns-default-8dmqk" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134848 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cm7c9\" (UniqueName: \"kubernetes.io/projected/8492563b-3d36-4b7a-a88b-ffd27fd60eb1-kube-api-access-cm7c9\") pod \"downloads-747b44746d-8mw6h\" (UID: \"8492563b-3d36-4b7a-a88b-ffd27fd60eb1\") " pod="openshift-console/downloads-747b44746d-8mw6h" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134883 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q7bd6\" (UniqueName: \"kubernetes.io/projected/1a0d9df3-9b02-4473-bf24-e9af2df871e6-kube-api-access-q7bd6\") pod \"dns-operator-799b87ffcd-whcrz\" (UID: \"1a0d9df3-9b02-4473-bf24-e9af2df871e6\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-whcrz" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.134918 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ac40fa4-748c-4728-8297-697ebf1bf757-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-hhj8g\" (UID: \"3ac40fa4-748c-4728-8297-697ebf1bf757\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-hhj8g" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.135022 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bcdbee0b-45bb-462c-aac5-ccb96d5b814b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-7c2z6\" (UID: \"bcdbee0b-45bb-462c-aac5-ccb96d5b814b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7c2z6" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.135407 5119 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/bcdbee0b-45bb-462c-aac5-ccb96d5b814b-ready\") pod \"cni-sysctl-allowlist-ds-7c2z6\" (UID: \"bcdbee0b-45bb-462c-aac5-ccb96d5b814b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7c2z6" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.135417 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/965b8822-0f09-43d5-a864-775966130c7d-registration-dir\") pod \"csi-hostpathplugin-4mff9\" (UID: \"965b8822-0f09-43d5-a864-775966130c7d\") " pod="hostpath-provisioner/csi-hostpathplugin-4mff9" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.135504 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/965b8822-0f09-43d5-a864-775966130c7d-csi-data-dir\") pod \"csi-hostpathplugin-4mff9\" (UID: \"965b8822-0f09-43d5-a864-775966130c7d\") " pod="hostpath-provisioner/csi-hostpathplugin-4mff9" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.135524 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/965b8822-0f09-43d5-a864-775966130c7d-plugins-dir\") pod \"csi-hostpathplugin-4mff9\" (UID: \"965b8822-0f09-43d5-a864-775966130c7d\") " pod="hostpath-provisioner/csi-hostpathplugin-4mff9" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.135659 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82841644-be5e-4c71-a968-887f117d7b66-config\") pod \"console-operator-67c89758df-qg8t8\" (UID: \"82841644-be5e-4c71-a968-887f117d7b66\") " pod="openshift-console-operator/console-operator-67c89758df-qg8t8" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.135770 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/965b8822-0f09-43d5-a864-775966130c7d-mountpoint-dir\") pod \"csi-hostpathplugin-4mff9\" (UID: \"965b8822-0f09-43d5-a864-775966130c7d\") " pod="hostpath-provisioner/csi-hostpathplugin-4mff9" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.135803 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8pgg2\" (UniqueName: \"kubernetes.io/projected/3ac40fa4-748c-4728-8297-697ebf1bf757-kube-api-access-8pgg2\") pod \"ingress-operator-6b9cb4dbcf-hhj8g\" (UID: \"3ac40fa4-748c-4728-8297-697ebf1bf757\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-hhj8g" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.135834 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58dea456-427d-4599-a6b3-9ea369e332b2-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-6kwxp\" (UID: \"58dea456-427d-4599-a6b3-9ea369e332b2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6kwxp" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.135858 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58dea456-427d-4599-a6b3-9ea369e332b2-config\") pod \"kube-storage-version-migrator-operator-565b79b866-6kwxp\" (UID: \"58dea456-427d-4599-a6b3-9ea369e332b2\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6kwxp" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.135900 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-registry-tls\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.142086 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ac40fa4-748c-4728-8297-697ebf1bf757-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-hhj8g\" (UID: \"3ac40fa4-748c-4728-8297-697ebf1bf757\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-hhj8g" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.148206 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-registry-certificates\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.151559 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/965b8822-0f09-43d5-a864-775966130c7d-plugins-dir\") pod \"csi-hostpathplugin-4mff9\" (UID: \"965b8822-0f09-43d5-a864-775966130c7d\") " pod="hostpath-provisioner/csi-hostpathplugin-4mff9" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.152461 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82841644-be5e-4c71-a968-887f117d7b66-config\") pod \"console-operator-67c89758df-qg8t8\" (UID: \"82841644-be5e-4c71-a968-887f117d7b66\") " pod="openshift-console-operator/console-operator-67c89758df-qg8t8" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.152525 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/965b8822-0f09-43d5-a864-775966130c7d-mountpoint-dir\") pod \"csi-hostpathplugin-4mff9\" (UID: \"965b8822-0f09-43d5-a864-775966130c7d\") " pod="hostpath-provisioner/csi-hostpathplugin-4mff9" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.155154 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/202de310-1c98-4a10-9ea6-b32d9debbd8c-tmp-dir\") pod \"dns-default-8dmqk\" (UID: \"202de310-1c98-4a10-9ea6-b32d9debbd8c\") " pod="openshift-dns/dns-default-8dmqk" Feb 20 00:12:03 crc kubenswrapper[5119]: E0220 00:12:03.155305 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:03.655276188 +0000 UTC m=+105.634240480 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.162032 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58dea456-427d-4599-a6b3-9ea369e332b2-config\") pod \"kube-storage-version-migrator-operator-565b79b866-6kwxp\" (UID: \"58dea456-427d-4599-a6b3-9ea369e332b2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6kwxp" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.162599 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-registry-tls\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.163150 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/bcdbee0b-45bb-462c-aac5-ccb96d5b814b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-7c2z6\" (UID: \"bcdbee0b-45bb-462c-aac5-ccb96d5b814b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7c2z6" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.163517 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-ca-trust-extracted\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.164710 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82841644-be5e-4c71-a968-887f117d7b66-trusted-ca\") pod \"console-operator-67c89758df-qg8t8\" (UID: \"82841644-be5e-4c71-a968-887f117d7b66\") " pod="openshift-console-operator/console-operator-67c89758df-qg8t8" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.165025 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f09adde3-d2ec-4383-ae3f-1eb9491e0b3c-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-wwrm5\" (UID: \"f09adde3-d2ec-4383-ae3f-1eb9491e0b3c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wwrm5" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.167283 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/965b8822-0f09-43d5-a864-775966130c7d-socket-dir\") pod \"csi-hostpathplugin-4mff9\" (UID: \"965b8822-0f09-43d5-a864-775966130c7d\") " pod="hostpath-provisioner/csi-hostpathplugin-4mff9" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.168092 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/202de310-1c98-4a10-9ea6-b32d9debbd8c-config-volume\") pod \"dns-default-8dmqk\" (UID: \"202de310-1c98-4a10-9ea6-b32d9debbd8c\") " pod="openshift-dns/dns-default-8dmqk" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.168582 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f09adde3-d2ec-4383-ae3f-1eb9491e0b3c-config\") pod \"openshift-controller-manager-operator-686468bdd5-wwrm5\" (UID: \"f09adde3-d2ec-4383-ae3f-1eb9491e0b3c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wwrm5" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.170044 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-trusted-ca\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.175848 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/374127c8-2589-49b2-9945-92cecb2993de-certs\") pod \"machine-config-server-nglnc\" (UID: \"374127c8-2589-49b2-9945-92cecb2993de\") " pod="openshift-machine-config-operator/machine-config-server-nglnc" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.175908 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1a0d9df3-9b02-4473-bf24-e9af2df871e6-tmp-dir\") pod \"dns-operator-799b87ffcd-whcrz\" (UID: \"1a0d9df3-9b02-4473-bf24-e9af2df871e6\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-whcrz" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.176889 5119 ???:1] "http: TLS handshake error from 192.168.126.11:57514: no serving certificate available for the kubelet" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.178575 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58dea456-427d-4599-a6b3-9ea369e332b2-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-6kwxp\" (UID: \"58dea456-427d-4599-a6b3-9ea369e332b2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6kwxp" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.183787 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/202de310-1c98-4a10-9ea6-b32d9debbd8c-metrics-tls\") pod \"dns-default-8dmqk\" (UID: \"202de310-1c98-4a10-9ea6-b32d9debbd8c\") " pod="openshift-dns/dns-default-8dmqk" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.184018 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-bound-sa-token\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.187347 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1a0d9df3-9b02-4473-bf24-e9af2df871e6-metrics-tls\") pod \"dns-operator-799b87ffcd-whcrz\" (UID: \"1a0d9df3-9b02-4473-bf24-e9af2df871e6\") " 
pod="openshift-dns-operator/dns-operator-799b87ffcd-whcrz" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.189420 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/760dbf42-7c22-4f3b-926d-88e1ce4e2963-cert\") pod \"ingress-canary-sn42f\" (UID: \"760dbf42-7c22-4f3b-926d-88e1ce4e2963\") " pod="openshift-ingress-canary/ingress-canary-sn42f" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.190031 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/374127c8-2589-49b2-9945-92cecb2993de-node-bootstrap-token\") pod \"machine-config-server-nglnc\" (UID: \"374127c8-2589-49b2-9945-92cecb2993de\") " pod="openshift-machine-config-operator/machine-config-server-nglnc" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.190506 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82841644-be5e-4c71-a968-887f117d7b66-serving-cert\") pod \"console-operator-67c89758df-qg8t8\" (UID: \"82841644-be5e-4c71-a968-887f117d7b66\") " pod="openshift-console-operator/console-operator-67c89758df-qg8t8" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.198824 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgbbh\" (UniqueName: \"kubernetes.io/projected/82841644-be5e-4c71-a968-887f117d7b66-kube-api-access-wgbbh\") pod \"console-operator-67c89758df-qg8t8\" (UID: \"82841644-be5e-4c71-a968-887f117d7b66\") " pod="openshift-console-operator/console-operator-67c89758df-qg8t8" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.199197 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f09adde3-d2ec-4383-ae3f-1eb9491e0b3c-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-wwrm5\" (UID: \"f09adde3-d2ec-4383-ae3f-1eb9491e0b3c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wwrm5" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.205736 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ac40fa4-748c-4728-8297-697ebf1bf757-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-hhj8g\" (UID: \"3ac40fa4-748c-4728-8297-697ebf1bf757\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-hhj8g" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.205806 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-installation-pull-secrets\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.209106 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d82q\" (UniqueName: \"kubernetes.io/projected/965b8822-0f09-43d5-a864-775966130c7d-kube-api-access-9d82q\") pod \"csi-hostpathplugin-4mff9\" (UID: \"965b8822-0f09-43d5-a864-775966130c7d\") " pod="hostpath-provisioner/csi-hostpathplugin-4mff9" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.213848 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c8rk\" (UniqueName: 
\"kubernetes.io/projected/f09adde3-d2ec-4383-ae3f-1eb9491e0b3c-kube-api-access-7c8rk\") pod \"openshift-controller-manager-operator-686468bdd5-wwrm5\" (UID: \"f09adde3-d2ec-4383-ae3f-1eb9491e0b3c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wwrm5" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.214339 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkg6w\" (UniqueName: \"kubernetes.io/projected/58dea456-427d-4599-a6b3-9ea369e332b2-kube-api-access-tkg6w\") pod \"kube-storage-version-migrator-operator-565b79b866-6kwxp\" (UID: \"58dea456-427d-4599-a6b3-9ea369e332b2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6kwxp" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.214373 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xrj7\" (UniqueName: \"kubernetes.io/projected/760dbf42-7c22-4f3b-926d-88e1ce4e2963-kube-api-access-5xrj7\") pod \"ingress-canary-sn42f\" (UID: \"760dbf42-7c22-4f3b-926d-88e1ce4e2963\") " pod="openshift-ingress-canary/ingress-canary-sn42f" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.227247 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pgg2\" (UniqueName: \"kubernetes.io/projected/3ac40fa4-748c-4728-8297-697ebf1bf757-kube-api-access-8pgg2\") pod \"ingress-operator-6b9cb4dbcf-hhj8g\" (UID: \"3ac40fa4-748c-4728-8297-697ebf1bf757\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-hhj8g" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.229231 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3ac40fa4-748c-4728-8297-697ebf1bf757-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-hhj8g\" (UID: \"3ac40fa4-748c-4728-8297-697ebf1bf757\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-hhj8g" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.237233 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:03 crc kubenswrapper[5119]: E0220 00:12:03.237730 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:03.737711274 +0000 UTC m=+105.716675566 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.238762 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm7c9\" (UniqueName: \"kubernetes.io/projected/8492563b-3d36-4b7a-a88b-ffd27fd60eb1-kube-api-access-cm7c9\") pod \"downloads-747b44746d-8mw6h\" (UID: \"8492563b-3d36-4b7a-a88b-ffd27fd60eb1\") " pod="openshift-console/downloads-747b44746d-8mw6h" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.239290 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4bvf\" (UniqueName: \"kubernetes.io/projected/202de310-1c98-4a10-9ea6-b32d9debbd8c-kube-api-access-w4bvf\") pod \"dns-default-8dmqk\" (UID: \"202de310-1c98-4a10-9ea6-b32d9debbd8c\") " pod="openshift-dns/dns-default-8dmqk" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.239984 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-89dsk\" (UniqueName: \"kubernetes.io/projected/374127c8-2589-49b2-9945-92cecb2993de-kube-api-access-89dsk\") pod \"machine-config-server-nglnc\" (UID: \"374127c8-2589-49b2-9945-92cecb2993de\") " pod="openshift-machine-config-operator/machine-config-server-nglnc" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.240788 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-49v5x\" (UniqueName: \"kubernetes.io/projected/bcdbee0b-45bb-462c-aac5-ccb96d5b814b-kube-api-access-49v5x\") pod \"cni-sysctl-allowlist-ds-7c2z6\" (UID: \"bcdbee0b-45bb-462c-aac5-ccb96d5b814b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7c2z6" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.248008 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bqq9\" (UniqueName: \"kubernetes.io/projected/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-kube-api-access-7bqq9\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.255237 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7bd6\" (UniqueName: \"kubernetes.io/projected/1a0d9df3-9b02-4473-bf24-e9af2df871e6-kube-api-access-q7bd6\") pod \"dns-operator-799b87ffcd-whcrz\" (UID: \"1a0d9df3-9b02-4473-bf24-e9af2df871e6\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-whcrz" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.268619 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zshjz"] Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.288655 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-whcrz" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.288771 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6kwxp" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.305223 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-hhj8g" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.338517 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:03 crc kubenswrapper[5119]: E0220 00:12:03.339106 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:03.839083959 +0000 UTC m=+105.818048251 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.347819 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-4mff9" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.440127 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:03 crc kubenswrapper[5119]: E0220 00:12:03.440499 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:03.940484024 +0000 UTC m=+105.919448316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.441643 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-8dmqk" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.446395 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wwrm5" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.449626 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-7c2z6" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.458797 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-nglnc" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.469570 5119 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-gmfcd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 00:12:03 crc kubenswrapper[5119]: [-]has-synced failed: reason withheld Feb 20 00:12:03 crc kubenswrapper[5119]: [+]process-running ok Feb 20 00:12:03 crc kubenswrapper[5119]: healthz check failed Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.469999 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" podUID="3a805538-2592-40cf-9131-444b1c0f3cbb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.470667 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-sn42f" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.477373 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-fvnk8" event={"ID":"05c29cb3-f6e5-40f4-8780-0a98b0c52330","Type":"ContainerStarted","Data":"66df95679702794ea2740a5e9b6ccf48a67504a64681dcd1fa45e684cf8b0706"} Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.491371 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-4q264" event={"ID":"b1b64ff0-878d-4b3d-8b8f-67cf8ea08987","Type":"ContainerStarted","Data":"b914ef97ca2d95504c0aa43874bc7d42ab0e18af65356192f8db887694accb4e"} Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.497424 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-qg8t8" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.511901 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zshjz" event={"ID":"97ee1eef-ee44-442d-ab55-8ae1d6bd364a","Type":"ContainerStarted","Data":"999ac72ffb5cc8acf06d73ab302683b43a9de8ec118c1f28c5a009214b071dd3"} Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.512726 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-fvnk8" podStartSLOduration=84.512715606 podStartE2EDuration="1m24.512715606s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:03.512069858 +0000 UTC m=+105.491034150" watchObservedRunningTime="2026-02-20 00:12:03.512715606 +0000 UTC m=+105.491679898" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.523113 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-g2szw" event={"ID":"8eb821d0-6b6b-422d-bbc5-8394778f57cd","Type":"ContainerStarted","Data":"3f635306035e73d9b72988de998b23526ff13295e274e98f3a0e91ce264e8ed6"} Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.525160 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fd2rm" event={"ID":"2929140b-fc72-423b-a599-6893f45f258b","Type":"ContainerStarted","Data":"60546ab8f8c9cbc4b212ce98da1af4c1efc1543575e5d273aa7f726dd6776d86"} Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.533245 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-8mw6h" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.541601 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:03 crc kubenswrapper[5119]: E0220 00:12:03.542371 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:04.042352152 +0000 UTC m=+106.021316444 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.548301 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8f5xb" event={"ID":"30f80be9-b4ee-419d-82c1-0341637259d2","Type":"ContainerStarted","Data":"e79222002f8751659a4c7106b7ecb419dc3bd6a39c1ce1ca185aefd4a2a4c90a"} Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.557492 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-xk7zk" event={"ID":"2c8310dc-cc1c-4c06-a86e-4147e8b11a7c","Type":"ContainerStarted","Data":"ec6752828ba38336eeee80dd32b81388d8190b4fd5708f942da6893e553932bd"} Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.580230 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5vrqr" event={"ID":"1a8aed0e-0d53-4db2-ac89-b0c953e8fc36","Type":"ContainerStarted","Data":"ee8e767d8396fa0575333221eb4ce7c545dac1d6d0332897319e780a758c95bd"} Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.581178 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-g2szw" podStartSLOduration=84.581158315 podStartE2EDuration="1m24.581158315s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:03.578709939 +0000 UTC m=+105.557674261" watchObservedRunningTime="2026-02-20 00:12:03.581158315 +0000 UTC m=+105.560122607" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.644288 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-8f5xb" podStartSLOduration=84.644269201 podStartE2EDuration="1m24.644269201s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:03.642993877 +0000 UTC m=+105.621958179" watchObservedRunningTime="2026-02-20 00:12:03.644269201 +0000 UTC m=+105.623233493" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.645421 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:03 crc kubenswrapper[5119]: E0220 00:12:03.647489 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:04.147467587 +0000 UTC m=+106.126431879 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.677084 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" event={"ID":"830cfba4-378e-4f83-a28a-add80c1cb7e0","Type":"ContainerStarted","Data":"890beafec8a13742349f993ee1ad2e41ece6f7fca195cea6f3eedf9cb9693edd"} Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.689148 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jnprp" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.691330 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9vxhd" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.697852 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.701296 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6tsqj" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.714733 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-gx2g8" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.716811 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fd2rm" podStartSLOduration=84.716787029 podStartE2EDuration="1m24.716787029s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:03.680094583 +0000 UTC m=+105.659058895" watchObservedRunningTime="2026-02-20 00:12:03.716787029 +0000 UTC m=+105.695751321" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.717081 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-xk7zk" podStartSLOduration=83.717076138 podStartE2EDuration="1m23.717076138s" podCreationTimestamp="2026-02-20 00:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:03.7156695 +0000 UTC m=+105.694633812" watchObservedRunningTime="2026-02-20 00:12:03.717076138 +0000 UTC m=+105.696040430" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.728791 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-s6ttl" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.742100 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8ll66"] Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.749043 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:03 crc kubenswrapper[5119]: E0220 00:12:03.751696 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:04.251674467 +0000 UTC m=+106.230638759 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.787206 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8ll66" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.801964 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.803290 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8ll66"] Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.857077 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqzpb\" (UniqueName: \"kubernetes.io/projected/7a6456d4-f2fc-4c32-82bf-9c58cfa87699-kube-api-access-hqzpb\") pod \"certified-operators-8ll66\" (UID: \"7a6456d4-f2fc-4c32-82bf-9c58cfa87699\") " pod="openshift-marketplace/certified-operators-8ll66" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.857159 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.857239 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a6456d4-f2fc-4c32-82bf-9c58cfa87699-catalog-content\") pod \"certified-operators-8ll66\" (UID: \"7a6456d4-f2fc-4c32-82bf-9c58cfa87699\") " pod="openshift-marketplace/certified-operators-8ll66" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.857353 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a6456d4-f2fc-4c32-82bf-9c58cfa87699-utilities\") pod \"certified-operators-8ll66\" (UID: \"7a6456d4-f2fc-4c32-82bf-9c58cfa87699\") " pod="openshift-marketplace/certified-operators-8ll66" Feb 20 00:12:03 crc kubenswrapper[5119]: E0220 00:12:03.863743 5119 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:04.363724279 +0000 UTC m=+106.342688571 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.927862 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-599mk" podStartSLOduration=84.927837692 podStartE2EDuration="1m24.927837692s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:03.886711717 +0000 UTC m=+105.865676009" watchObservedRunningTime="2026-02-20 00:12:03.927837692 +0000 UTC m=+105.906801984" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.958856 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.963004 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a6456d4-f2fc-4c32-82bf-9c58cfa87699-utilities\") pod \"certified-operators-8ll66\" (UID: \"7a6456d4-f2fc-4c32-82bf-9c58cfa87699\") " pod="openshift-marketplace/certified-operators-8ll66" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.963089 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hqzpb\" (UniqueName: \"kubernetes.io/projected/7a6456d4-f2fc-4c32-82bf-9c58cfa87699-kube-api-access-hqzpb\") pod \"certified-operators-8ll66\" (UID: \"7a6456d4-f2fc-4c32-82bf-9c58cfa87699\") " pod="openshift-marketplace/certified-operators-8ll66" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.963175 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a6456d4-f2fc-4c32-82bf-9c58cfa87699-catalog-content\") pod \"certified-operators-8ll66\" (UID: \"7a6456d4-f2fc-4c32-82bf-9c58cfa87699\") " pod="openshift-marketplace/certified-operators-8ll66" Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.967254 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a6456d4-f2fc-4c32-82bf-9c58cfa87699-utilities\") pod \"certified-operators-8ll66\" (UID: \"7a6456d4-f2fc-4c32-82bf-9c58cfa87699\") " pod="openshift-marketplace/certified-operators-8ll66" Feb 20 00:12:03 crc kubenswrapper[5119]: E0220 00:12:03.967391 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-20 00:12:04.467367864 +0000 UTC m=+106.446332156 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:03 crc kubenswrapper[5119]: I0220 00:12:03.976623 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a6456d4-f2fc-4c32-82bf-9c58cfa87699-catalog-content\") pod \"certified-operators-8ll66\" (UID: \"7a6456d4-f2fc-4c32-82bf-9c58cfa87699\") " pod="openshift-marketplace/certified-operators-8ll66" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.003422 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z2qsm"] Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.040527 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqzpb\" (UniqueName: \"kubernetes.io/projected/7a6456d4-f2fc-4c32-82bf-9c58cfa87699-kube-api-access-hqzpb\") pod \"certified-operators-8ll66\" (UID: \"7a6456d4-f2fc-4c32-82bf-9c58cfa87699\") " pod="openshift-marketplace/certified-operators-8ll66" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.048768 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z2qsm"] Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.048908 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z2qsm" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.054310 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.070109 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:04 crc kubenswrapper[5119]: E0220 00:12:04.070529 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:04.570513277 +0000 UTC m=+106.549477559 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.139282 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8ll66" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.173412 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.173627 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2n5n\" (UniqueName: \"kubernetes.io/projected/d1da415e-215f-4b73-b5b5-36a8c7e68fda-kube-api-access-j2n5n\") pod \"community-operators-z2qsm\" (UID: \"d1da415e-215f-4b73-b5b5-36a8c7e68fda\") " pod="openshift-marketplace/community-operators-z2qsm" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.173677 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1da415e-215f-4b73-b5b5-36a8c7e68fda-catalog-content\") pod \"community-operators-z2qsm\" (UID: \"d1da415e-215f-4b73-b5b5-36a8c7e68fda\") " pod="openshift-marketplace/community-operators-z2qsm" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.173731 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1da415e-215f-4b73-b5b5-36a8c7e68fda-utilities\") pod \"community-operators-z2qsm\" (UID: \"d1da415e-215f-4b73-b5b5-36a8c7e68fda\") " pod="openshift-marketplace/community-operators-z2qsm" Feb 20 00:12:04 crc kubenswrapper[5119]: E0220 00:12:04.174018 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:04.674000588 +0000 UTC m=+106.652964870 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.186374 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zk67r"] Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.200078 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zk67r"] Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.200174 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-j8mqx" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.200312 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zk67r" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.278914 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1da415e-215f-4b73-b5b5-36a8c7e68fda-catalog-content\") pod \"community-operators-z2qsm\" (UID: \"d1da415e-215f-4b73-b5b5-36a8c7e68fda\") " pod="openshift-marketplace/community-operators-z2qsm" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.279189 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26fedbc9-e967-46be-9bd2-00822aa128a5-catalog-content\") pod \"certified-operators-zk67r\" (UID: \"26fedbc9-e967-46be-9bd2-00822aa128a5\") " pod="openshift-marketplace/certified-operators-zk67r" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.279318 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7r6z\" (UniqueName: \"kubernetes.io/projected/26fedbc9-e967-46be-9bd2-00822aa128a5-kube-api-access-n7r6z\") pod \"certified-operators-zk67r\" (UID: \"26fedbc9-e967-46be-9bd2-00822aa128a5\") " pod="openshift-marketplace/certified-operators-zk67r" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.279417 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26fedbc9-e967-46be-9bd2-00822aa128a5-utilities\") pod \"certified-operators-zk67r\" (UID: \"26fedbc9-e967-46be-9bd2-00822aa128a5\") " pod="openshift-marketplace/certified-operators-zk67r" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.279578 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1da415e-215f-4b73-b5b5-36a8c7e68fda-utilities\") pod \"community-operators-z2qsm\" (UID: \"d1da415e-215f-4b73-b5b5-36a8c7e68fda\") " pod="openshift-marketplace/community-operators-z2qsm" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.279872 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j2n5n\" (UniqueName: \"kubernetes.io/projected/d1da415e-215f-4b73-b5b5-36a8c7e68fda-kube-api-access-j2n5n\") pod \"community-operators-z2qsm\" (UID: \"d1da415e-215f-4b73-b5b5-36a8c7e68fda\") " pod="openshift-marketplace/community-operators-z2qsm" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.279998 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:04 crc kubenswrapper[5119]: E0220 00:12:04.280464 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:04.780449009 +0000 UTC m=+106.759413301 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.281651 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1da415e-215f-4b73-b5b5-36a8c7e68fda-utilities\") pod \"community-operators-z2qsm\" (UID: \"d1da415e-215f-4b73-b5b5-36a8c7e68fda\") " pod="openshift-marketplace/community-operators-z2qsm" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.281716 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1da415e-215f-4b73-b5b5-36a8c7e68fda-catalog-content\") pod \"community-operators-z2qsm\" (UID: \"d1da415e-215f-4b73-b5b5-36a8c7e68fda\") " pod="openshift-marketplace/community-operators-z2qsm" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.313857 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4krw4"] Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.323649 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2n5n\" (UniqueName: \"kubernetes.io/projected/d1da415e-215f-4b73-b5b5-36a8c7e68fda-kube-api-access-j2n5n\") pod \"community-operators-z2qsm\" (UID: \"d1da415e-215f-4b73-b5b5-36a8c7e68fda\") " pod="openshift-marketplace/community-operators-z2qsm" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.336410 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4krw4" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.339806 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4krw4"] Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.380723 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.380947 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c54e6d8-4a07-479d-aec6-81085d348561-catalog-content\") pod \"community-operators-4krw4\" (UID: \"8c54e6d8-4a07-479d-aec6-81085d348561\") " pod="openshift-marketplace/community-operators-4krw4" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.381009 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c54e6d8-4a07-479d-aec6-81085d348561-utilities\") pod \"community-operators-4krw4\" (UID: \"8c54e6d8-4a07-479d-aec6-81085d348561\") " pod="openshift-marketplace/community-operators-4krw4" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.381050 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26fedbc9-e967-46be-9bd2-00822aa128a5-catalog-content\") pod \"certified-operators-zk67r\" (UID: \"26fedbc9-e967-46be-9bd2-00822aa128a5\") " pod="openshift-marketplace/certified-operators-zk67r" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.381084 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n7r6z\" (UniqueName: \"kubernetes.io/projected/26fedbc9-e967-46be-9bd2-00822aa128a5-kube-api-access-n7r6z\") pod \"certified-operators-zk67r\" (UID: \"26fedbc9-e967-46be-9bd2-00822aa128a5\") " pod="openshift-marketplace/certified-operators-zk67r" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.381104 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26fedbc9-e967-46be-9bd2-00822aa128a5-utilities\") pod \"certified-operators-zk67r\" (UID: \"26fedbc9-e967-46be-9bd2-00822aa128a5\") " pod="openshift-marketplace/certified-operators-zk67r" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.381127 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rhj6\" (UniqueName: \"kubernetes.io/projected/8c54e6d8-4a07-479d-aec6-81085d348561-kube-api-access-6rhj6\") pod \"community-operators-4krw4\" (UID: \"8c54e6d8-4a07-479d-aec6-81085d348561\") " pod="openshift-marketplace/community-operators-4krw4" Feb 20 00:12:04 crc kubenswrapper[5119]: E0220 00:12:04.381432 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:04.881397892 +0000 UTC m=+106.860362184 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.381556 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26fedbc9-e967-46be-9bd2-00822aa128a5-catalog-content\") pod \"certified-operators-zk67r\" (UID: \"26fedbc9-e967-46be-9bd2-00822aa128a5\") " pod="openshift-marketplace/certified-operators-zk67r" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.382005 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26fedbc9-e967-46be-9bd2-00822aa128a5-utilities\") pod \"certified-operators-zk67r\" (UID: \"26fedbc9-e967-46be-9bd2-00822aa128a5\") " pod="openshift-marketplace/certified-operators-zk67r" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.412526 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6kwxp"] Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.413914 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z2qsm" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.425287 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7r6z\" (UniqueName: \"kubernetes.io/projected/26fedbc9-e967-46be-9bd2-00822aa128a5-kube-api-access-n7r6z\") pod \"certified-operators-zk67r\" (UID: \"26fedbc9-e967-46be-9bd2-00822aa128a5\") " pod="openshift-marketplace/certified-operators-zk67r" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.474868 5119 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-gmfcd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 00:12:04 crc kubenswrapper[5119]: [-]has-synced failed: reason withheld Feb 20 00:12:04 crc kubenswrapper[5119]: [+]process-running ok Feb 20 00:12:04 crc kubenswrapper[5119]: healthz check failed Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.474921 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" podUID="3a805538-2592-40cf-9131-444b1c0f3cbb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.488664 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c54e6d8-4a07-479d-aec6-81085d348561-catalog-content\") pod \"community-operators-4krw4\" (UID: \"8c54e6d8-4a07-479d-aec6-81085d348561\") " pod="openshift-marketplace/community-operators-4krw4" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.488731 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.488763 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c54e6d8-4a07-479d-aec6-81085d348561-utilities\") pod \"community-operators-4krw4\" (UID: \"8c54e6d8-4a07-479d-aec6-81085d348561\") " pod="openshift-marketplace/community-operators-4krw4" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.488834 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6rhj6\" (UniqueName: \"kubernetes.io/projected/8c54e6d8-4a07-479d-aec6-81085d348561-kube-api-access-6rhj6\") pod \"community-operators-4krw4\" (UID: \"8c54e6d8-4a07-479d-aec6-81085d348561\") " pod="openshift-marketplace/community-operators-4krw4" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.489483 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c54e6d8-4a07-479d-aec6-81085d348561-catalog-content\") pod \"community-operators-4krw4\" (UID: \"8c54e6d8-4a07-479d-aec6-81085d348561\") " pod="openshift-marketplace/community-operators-4krw4" Feb 20 00:12:04 crc kubenswrapper[5119]: E0220 00:12:04.489764 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:04.989751305 +0000 UTC m=+106.968715597 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.489987 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c54e6d8-4a07-479d-aec6-81085d348561-utilities\") pod \"community-operators-4krw4\" (UID: \"8c54e6d8-4a07-479d-aec6-81085d348561\") " pod="openshift-marketplace/community-operators-4krw4" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.530497 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rhj6\" (UniqueName: \"kubernetes.io/projected/8c54e6d8-4a07-479d-aec6-81085d348561-kube-api-access-6rhj6\") pod \"community-operators-4krw4\" (UID: \"8c54e6d8-4a07-479d-aec6-81085d348561\") " pod="openshift-marketplace/community-operators-4krw4" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.542249 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-hhj8g"] Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.557118 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4mff9"] Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.589767 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:04 crc kubenswrapper[5119]: E0220 00:12:04.590141 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:05.090122141 +0000 UTC m=+107.069086433 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.652470 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zk67r" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.692333 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:04 crc kubenswrapper[5119]: E0220 00:12:04.692806 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:05.192792002 +0000 UTC m=+107.171756294 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.705666 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-7c2z6" event={"ID":"bcdbee0b-45bb-462c-aac5-ccb96d5b814b","Type":"ContainerStarted","Data":"da295c3421409e96538fb7a400abcf52373238955c25be6cedda39c129afb1ab"} Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.713121 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-qg8t8"] Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.716329 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-hhj8g" event={"ID":"3ac40fa4-748c-4728-8297-697ebf1bf757","Type":"ContainerStarted","Data":"1eafbc8e9558b69ecf343849d497f6c681a2228b79f33ef6754944c92ac35ecc"} Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.719728 5119 generic.go:358] "Generic (PLEG): container finished" podID="863693a2-ad89-4568-abd2-d96f7f9db45e" containerID="d2e3de79c863086e324c570ca8a8042dda6949ba8695039184321c2f41b07bfc" exitCode=0 Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.719835 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-fwrs2" event={"ID":"863693a2-ad89-4568-abd2-d96f7f9db45e","Type":"ContainerDied","Data":"d2e3de79c863086e324c570ca8a8042dda6949ba8695039184321c2f41b07bfc"} Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.730753 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4krw4" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.752475 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-sn42f"] Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.756734 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-nglnc" event={"ID":"374127c8-2589-49b2-9945-92cecb2993de","Type":"ContainerStarted","Data":"8032e40b5c324f3192413fa4094ec2ebc3974adad85ae8c9d9d87961e26a504b"} Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.756759 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-nglnc" event={"ID":"374127c8-2589-49b2-9945-92cecb2993de","Type":"ContainerStarted","Data":"896bc02667f53c1e921253c6db049ba0f0a6ad118d2ecb5c8a63a53e6e1f9859"} Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.761830 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4mff9" event={"ID":"965b8822-0f09-43d5-a864-775966130c7d","Type":"ContainerStarted","Data":"0fcd07560fd68d33d8c787d09127140ec236f1cafd6d1c2d70922d61afdbfb25"} Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.766249 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-whcrz"] Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.793565 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.794302 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-nglnc" podStartSLOduration=8.794284379 podStartE2EDuration="8.794284379s" podCreationTimestamp="2026-02-20 00:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:04.793270921 +0000 UTC m=+106.772235213" watchObservedRunningTime="2026-02-20 00:12:04.794284379 +0000 UTC m=+106.773248671" Feb 20 00:12:04 crc kubenswrapper[5119]: E0220 00:12:04.794841 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:05.294801232 +0000 UTC m=+107.273765524 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.804877 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-8mw6h"] Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.805499 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zshjz" event={"ID":"97ee1eef-ee44-442d-ab55-8ae1d6bd364a","Type":"ContainerStarted","Data":"ad2b98f85313d7dd373eeae0181083e00aad4684b75a262816e4c226fc0097cf"} Feb 20 00:12:04 crc kubenswrapper[5119]: W0220 00:12:04.824002 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod760dbf42_7c22_4f3b_926d_88e1ce4e2963.slice/crio-12a1bf382683c4a08bff934b6be8c0c052d4e647662bee72c7180b6d0c7bf9ca WatchSource:0}: Error finding container 12a1bf382683c4a08bff934b6be8c0c052d4e647662bee72c7180b6d0c7bf9ca: Status 404 returned error can't find the container with id 12a1bf382683c4a08bff934b6be8c0c052d4e647662bee72c7180b6d0c7bf9ca Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.836935 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6kwxp" event={"ID":"58dea456-427d-4599-a6b3-9ea369e332b2","Type":"ContainerStarted","Data":"fc68f26b24c1d2109ae26714d6bc524e187df4b6cfa604b619a794173b5506ae"} Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.843710 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-zshjz" podStartSLOduration=85.843689736 podStartE2EDuration="1m25.843689736s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:04.84306703 +0000 UTC m=+106.822031342" watchObservedRunningTime="2026-02-20 00:12:04.843689736 +0000 UTC m=+106.822654028" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.908967 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.931485 5119 scope.go:117] "RemoveContainer" containerID="f6194530545437982490600b50c8861471742ab842e14ab627a123778c428dcf" Feb 20 00:12:04 crc kubenswrapper[5119]: E0220 00:12:04.932330 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 20 00:12:04 crc kubenswrapper[5119]: E0220 00:12:04.944606 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:05.444581958 +0000 UTC m=+107.423546250 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:04 crc kubenswrapper[5119]: I0220 00:12:04.986576 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wwrm5"] Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.017996 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:05 crc kubenswrapper[5119]: E0220 00:12:05.018595 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:05.518575577 +0000 UTC m=+107.497539869 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.031019 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-8dmqk"] Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.059458 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z2qsm"] Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.076705 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8ll66"] Feb 20 00:12:05 crc kubenswrapper[5119]: W0220 00:12:05.101209 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a6456d4_f2fc_4c32_82bf_9c58cfa87699.slice/crio-15871269f18ead6ce2f1f717e072e931d402e18a06ae158f7d4ff4cc13281cbe WatchSource:0}: Error finding container 15871269f18ead6ce2f1f717e072e931d402e18a06ae158f7d4ff4cc13281cbe: Status 404 returned error can't find the container with id 15871269f18ead6ce2f1f717e072e931d402e18a06ae158f7d4ff4cc13281cbe Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.123072 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:05 crc kubenswrapper[5119]: E0220 00:12:05.123507 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:05.623491186 +0000 UTC m=+107.602455478 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.148205 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zk67r"] Feb 20 00:12:05 crc kubenswrapper[5119]: W0220 00:12:05.178228 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26fedbc9_e967_46be_9bd2_00822aa128a5.slice/crio-1cd8b24504d4cb2c60132bef0bb1778e86d1bb4d52bf7528c5d28ad6533c10be WatchSource:0}: Error finding container 1cd8b24504d4cb2c60132bef0bb1778e86d1bb4d52bf7528c5d28ad6533c10be: Status 404 returned error can't find the container with id 1cd8b24504d4cb2c60132bef0bb1778e86d1bb4d52bf7528c5d28ad6533c10be Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.224065 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:05 crc kubenswrapper[5119]: E0220 00:12:05.224604 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:05.724585783 +0000 UTC m=+107.703550075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.325368 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:05 crc kubenswrapper[5119]: E0220 00:12:05.326019 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:05.825996089 +0000 UTC m=+107.804960381 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.426934 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:05 crc kubenswrapper[5119]: E0220 00:12:05.427425 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:05.927405524 +0000 UTC m=+107.906369816 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.466903 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4krw4"] Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.469742 5119 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-gmfcd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 00:12:05 crc kubenswrapper[5119]: [-]has-synced failed: reason withheld Feb 20 00:12:05 crc kubenswrapper[5119]: [+]process-running ok Feb 20 00:12:05 crc kubenswrapper[5119]: healthz check failed Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.469801 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" podUID="3a805538-2592-40cf-9131-444b1c0f3cbb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.528254 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:05 crc kubenswrapper[5119]: E0220 00:12:05.528639 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:06.028615845 +0000 UTC m=+108.007580137 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.630385 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:05 crc kubenswrapper[5119]: E0220 00:12:05.630719 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:06.130695488 +0000 UTC m=+108.109659780 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.732568 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:05 crc kubenswrapper[5119]: E0220 00:12:05.732934 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:06.232919685 +0000 UTC m=+108.211883977 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.767604 5119 ???:1] "http: TLS handshake error from 192.168.126.11:57526: no serving certificate available for the kubelet" Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.834617 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:05 crc kubenswrapper[5119]: E0220 00:12:05.834748 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:06.334714861 +0000 UTC m=+108.313679153 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.835205 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:05 crc kubenswrapper[5119]: E0220 00:12:05.835775 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:06.33576523 +0000 UTC m=+108.314729532 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.857449 5119 generic.go:358] "Generic (PLEG): container finished" podID="d1da415e-215f-4b73-b5b5-36a8c7e68fda" containerID="0001f7f748f11423cee3d39b4042d8523499cd575e40befff668db74f1fc5630" exitCode=0 Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.857518 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2qsm" event={"ID":"d1da415e-215f-4b73-b5b5-36a8c7e68fda","Type":"ContainerDied","Data":"0001f7f748f11423cee3d39b4042d8523499cd575e40befff668db74f1fc5630"} Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.857569 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2qsm" event={"ID":"d1da415e-215f-4b73-b5b5-36a8c7e68fda","Type":"ContainerStarted","Data":"4b61b0c49f0ac43b6616d7360813d0df846204c4a37baaf3d3a5f936fe37e389"} Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.866249 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8dmqk" event={"ID":"202de310-1c98-4a10-9ea6-b32d9debbd8c","Type":"ContainerStarted","Data":"120291f2d7694b2ce9053842ca9c21ee2dbcf98cc754e371299aa97239a7f75b"} Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.866298 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8dmqk" event={"ID":"202de310-1c98-4a10-9ea6-b32d9debbd8c","Type":"ContainerStarted","Data":"61670313a16c5160192e20cefd45cde2e9f2b1bded00f81a5e1ad08b89911c8c"} Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.868229 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-sn42f" event={"ID":"760dbf42-7c22-4f3b-926d-88e1ce4e2963","Type":"ContainerStarted","Data":"5c7b6dc295e0eb77c786e08ac841d4cce4afa258c03688abeb2d34cdbf324edb"} Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.868253 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-sn42f" event={"ID":"760dbf42-7c22-4f3b-926d-88e1ce4e2963","Type":"ContainerStarted","Data":"12a1bf382683c4a08bff934b6be8c0c052d4e647662bee72c7180b6d0c7bf9ca"} Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.871345 5119 generic.go:358] "Generic (PLEG): container finished" podID="7a6456d4-f2fc-4c32-82bf-9c58cfa87699" containerID="ff96dbb5c23f564c7100fb44d6ea14d6bf6be5b9d43b0f29dbea49efb03a6273" exitCode=0 Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.871449 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ll66" event={"ID":"7a6456d4-f2fc-4c32-82bf-9c58cfa87699","Type":"ContainerDied","Data":"ff96dbb5c23f564c7100fb44d6ea14d6bf6be5b9d43b0f29dbea49efb03a6273"} Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.871487 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ll66" event={"ID":"7a6456d4-f2fc-4c32-82bf-9c58cfa87699","Type":"ContainerStarted","Data":"15871269f18ead6ce2f1f717e072e931d402e18a06ae158f7d4ff4cc13281cbe"} Feb 20 00:12:05 crc 
kubenswrapper[5119]: I0220 00:12:05.872953 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-7c2z6" event={"ID":"bcdbee0b-45bb-462c-aac5-ccb96d5b814b","Type":"ContainerStarted","Data":"d4f7da01f3dc86caf7fd824b6f35836203cc49897c458c262289c2d45b168504"} Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.875062 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-hhj8g" event={"ID":"3ac40fa4-748c-4728-8297-697ebf1bf757","Type":"ContainerStarted","Data":"d6249191ca8ee4494847268b1619c5fb378fb29dfc1b528c857bfdb24224885a"} Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.875100 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-hhj8g" event={"ID":"3ac40fa4-748c-4728-8297-697ebf1bf757","Type":"ContainerStarted","Data":"6dd38abf2fa9467b766c824cbc3bc921f7994733fad0505b9926cc0b928309e5"} Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.877319 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-whcrz" event={"ID":"1a0d9df3-9b02-4473-bf24-e9af2df871e6","Type":"ContainerStarted","Data":"03770dd2392b137c5b3a1f95d23035f4ba40abdbdbf9d4484869de89b8b8659d"} Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.877386 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-whcrz" event={"ID":"1a0d9df3-9b02-4473-bf24-e9af2df871e6","Type":"ContainerStarted","Data":"2324a2a1c5e540d7f0d4552562ce92cd5e7b771caa67f26456071d2b6b81b34d"} Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.883016 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6kwxp" event={"ID":"58dea456-427d-4599-a6b3-9ea369e332b2","Type":"ContainerStarted","Data":"c672bfa06977ad70c22e16c5744a57da40c199ec639652bd27b8485d71f56ffd"} Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.887395 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zk67r" event={"ID":"26fedbc9-e967-46be-9bd2-00822aa128a5","Type":"ContainerStarted","Data":"a1055600a504f918cfaf9eb422d6d77c30577ba34c68084254e7d3d399465774"} Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.887443 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zk67r" event={"ID":"26fedbc9-e967-46be-9bd2-00822aa128a5","Type":"ContainerStarted","Data":"1cd8b24504d4cb2c60132bef0bb1778e86d1bb4d52bf7528c5d28ad6533c10be"} Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.891856 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-8mw6h" event={"ID":"8492563b-3d36-4b7a-a88b-ffd27fd60eb1","Type":"ContainerStarted","Data":"7e61c33d8882d197f974c49c46535c40c54dd9f41c0b92de4da934e7933b7fcf"} Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.891919 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-8mw6h" event={"ID":"8492563b-3d36-4b7a-a88b-ffd27fd60eb1","Type":"ContainerStarted","Data":"ff74c080196e5c23d76111d56b4bb51df8d985cf50624f634a5b40908ab1d8d9"} Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.894423 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-qg8t8" 
event={"ID":"82841644-be5e-4c71-a968-887f117d7b66","Type":"ContainerStarted","Data":"0728ced14b8c82841a4893ac7a80e33fc60f83cbdc12f608666c1e41090ba889"} Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.894492 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-qg8t8" event={"ID":"82841644-be5e-4c71-a968-887f117d7b66","Type":"ContainerStarted","Data":"3ef434d17c3b33abc9fb098c193dc0181c54fcedbca051a63e4c4420544ba5ad"} Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.896761 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-8mw6h" Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.896927 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-7c2z6" Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.899300 5119 patch_prober.go:28] interesting pod/downloads-747b44746d-8mw6h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.899386 5119 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-8mw6h" podUID="8492563b-3d36-4b7a-a88b-ffd27fd60eb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.899498 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wwrm5" event={"ID":"f09adde3-d2ec-4383-ae3f-1eb9491e0b3c","Type":"ContainerStarted","Data":"955b3aafdf4f9d14bc69af236a77d9b1eeeb39ed8c2c27be8faaa30f00380a3a"} Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.899567 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wwrm5" event={"ID":"f09adde3-d2ec-4383-ae3f-1eb9491e0b3c","Type":"ContainerStarted","Data":"5aa2e8ebea804cf2851f9215ef218b0d40df74f741779ecf6aac44b67173abcc"} Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.920794 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4krw4" event={"ID":"8c54e6d8-4a07-479d-aec6-81085d348561","Type":"ContainerStarted","Data":"0d4cb6fda7910dec81e76cb4669e2e90e7dc5b63563f9a48dfa1f50a7f4518a4"} Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.921256 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rprkl"] Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.933204 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rprkl"] Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.933647 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rprkl" Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.938935 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:05 crc kubenswrapper[5119]: E0220 00:12:05.940214 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:06.440195346 +0000 UTC m=+108.419159628 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.943795 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.963170 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-7c2z6" Feb 20 00:12:05 crc kubenswrapper[5119]: I0220 00:12:05.989760 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6kwxp" podStartSLOduration=86.989734048 podStartE2EDuration="1m26.989734048s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:05.984202199 +0000 UTC m=+107.963166491" watchObservedRunningTime="2026-02-20 00:12:05.989734048 +0000 UTC m=+107.968698340" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.014217 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-8mw6h" podStartSLOduration=87.014197115 podStartE2EDuration="1m27.014197115s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:06.009747266 +0000 UTC m=+107.988711568" watchObservedRunningTime="2026-02-20 00:12:06.014197115 +0000 UTC m=+107.993161407" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.033484 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-qg8t8" podStartSLOduration=87.033464623 podStartE2EDuration="1m27.033464623s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:06.029406654 +0000 UTC m=+108.008370946" watchObservedRunningTime="2026-02-20 00:12:06.033464623 +0000 UTC m=+108.012428915" Feb 20 00:12:06 crc 
kubenswrapper[5119]: I0220 00:12:06.040656 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.040743 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14397656-dc1d-4bf5-a1f2-e9b79fab3e53-utilities\") pod \"redhat-marketplace-rprkl\" (UID: \"14397656-dc1d-4bf5-a1f2-e9b79fab3e53\") " pod="openshift-marketplace/redhat-marketplace-rprkl" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.040807 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14397656-dc1d-4bf5-a1f2-e9b79fab3e53-catalog-content\") pod \"redhat-marketplace-rprkl\" (UID: \"14397656-dc1d-4bf5-a1f2-e9b79fab3e53\") " pod="openshift-marketplace/redhat-marketplace-rprkl" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.040912 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fpdk\" (UniqueName: \"kubernetes.io/projected/14397656-dc1d-4bf5-a1f2-e9b79fab3e53-kube-api-access-7fpdk\") pod \"redhat-marketplace-rprkl\" (UID: \"14397656-dc1d-4bf5-a1f2-e9b79fab3e53\") " pod="openshift-marketplace/redhat-marketplace-rprkl" Feb 20 00:12:06 crc kubenswrapper[5119]: E0220 00:12:06.042276 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:06.542257679 +0000 UTC m=+108.521221971 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.045798 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-sn42f" podStartSLOduration=10.045780174 podStartE2EDuration="10.045780174s" podCreationTimestamp="2026-02-20 00:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:06.044702205 +0000 UTC m=+108.023666497" watchObservedRunningTime="2026-02-20 00:12:06.045780174 +0000 UTC m=+108.024744466" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.078214 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-7c2z6" podStartSLOduration=10.078187415 podStartE2EDuration="10.078187415s" podCreationTimestamp="2026-02-20 00:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:06.068466234 +0000 UTC m=+108.047430526" watchObservedRunningTime="2026-02-20 00:12:06.078187415 +0000 UTC m=+108.057151727" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.094511 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wwrm5" podStartSLOduration=87.094491714 podStartE2EDuration="1m27.094491714s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:06.093319022 +0000 UTC m=+108.072283334" watchObservedRunningTime="2026-02-20 00:12:06.094491714 +0000 UTC m=+108.073456006" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.148011 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.148366 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14397656-dc1d-4bf5-a1f2-e9b79fab3e53-utilities\") pod \"redhat-marketplace-rprkl\" (UID: \"14397656-dc1d-4bf5-a1f2-e9b79fab3e53\") " pod="openshift-marketplace/redhat-marketplace-rprkl" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.148431 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14397656-dc1d-4bf5-a1f2-e9b79fab3e53-catalog-content\") pod \"redhat-marketplace-rprkl\" (UID: \"14397656-dc1d-4bf5-a1f2-e9b79fab3e53\") " pod="openshift-marketplace/redhat-marketplace-rprkl" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.148475 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7fpdk\" (UniqueName: \"kubernetes.io/projected/14397656-dc1d-4bf5-a1f2-e9b79fab3e53-kube-api-access-7fpdk\") pod \"redhat-marketplace-rprkl\" (UID: \"14397656-dc1d-4bf5-a1f2-e9b79fab3e53\") " pod="openshift-marketplace/redhat-marketplace-rprkl" Feb 20 00:12:06 crc kubenswrapper[5119]: E0220 00:12:06.150483 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:06.650444858 +0000 UTC m=+108.629409150 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.150928 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14397656-dc1d-4bf5-a1f2-e9b79fab3e53-utilities\") pod \"redhat-marketplace-rprkl\" (UID: \"14397656-dc1d-4bf5-a1f2-e9b79fab3e53\") " pod="openshift-marketplace/redhat-marketplace-rprkl" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.152063 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14397656-dc1d-4bf5-a1f2-e9b79fab3e53-catalog-content\") pod \"redhat-marketplace-rprkl\" (UID: \"14397656-dc1d-4bf5-a1f2-e9b79fab3e53\") " pod="openshift-marketplace/redhat-marketplace-rprkl" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.168956 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-hhj8g" podStartSLOduration=87.168934084 podStartE2EDuration="1m27.168934084s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:06.136872373 +0000 UTC m=+108.115836665" watchObservedRunningTime="2026-02-20 00:12:06.168934084 +0000 UTC m=+108.147898376" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.181674 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fpdk\" (UniqueName: \"kubernetes.io/projected/14397656-dc1d-4bf5-a1f2-e9b79fab3e53-kube-api-access-7fpdk\") pod \"redhat-marketplace-rprkl\" (UID: \"14397656-dc1d-4bf5-a1f2-e9b79fab3e53\") " pod="openshift-marketplace/redhat-marketplace-rprkl" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.246054 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-fwrs2" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.249844 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:06 crc kubenswrapper[5119]: E0220 00:12:06.250417 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:06.750402894 +0000 UTC m=+108.729367186 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.313186 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hs997"] Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.317573 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="863693a2-ad89-4568-abd2-d96f7f9db45e" containerName="collect-profiles" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.317680 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="863693a2-ad89-4568-abd2-d96f7f9db45e" containerName="collect-profiles" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.317922 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="863693a2-ad89-4568-abd2-d96f7f9db45e" containerName="collect-profiles" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.329654 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hs997" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.339314 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hs997"] Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.344916 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rprkl" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.350777 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/863693a2-ad89-4568-abd2-d96f7f9db45e-secret-volume\") pod \"863693a2-ad89-4568-abd2-d96f7f9db45e\" (UID: \"863693a2-ad89-4568-abd2-d96f7f9db45e\") " Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.350963 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zptmf\" (UniqueName: \"kubernetes.io/projected/863693a2-ad89-4568-abd2-d96f7f9db45e-kube-api-access-zptmf\") pod \"863693a2-ad89-4568-abd2-d96f7f9db45e\" (UID: \"863693a2-ad89-4568-abd2-d96f7f9db45e\") " Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.351169 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.351225 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/863693a2-ad89-4568-abd2-d96f7f9db45e-config-volume\") pod \"863693a2-ad89-4568-abd2-d96f7f9db45e\" (UID: \"863693a2-ad89-4568-abd2-d96f7f9db45e\") " Feb 20 00:12:06 crc kubenswrapper[5119]: E0220 00:12:06.355304 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:06.855257502 +0000 UTC m=+108.834221794 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.357260 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/863693a2-ad89-4568-abd2-d96f7f9db45e-config-volume" (OuterVolumeSpecName: "config-volume") pod "863693a2-ad89-4568-abd2-d96f7f9db45e" (UID: "863693a2-ad89-4568-abd2-d96f7f9db45e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.363683 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/863693a2-ad89-4568-abd2-d96f7f9db45e-kube-api-access-zptmf" (OuterVolumeSpecName: "kube-api-access-zptmf") pod "863693a2-ad89-4568-abd2-d96f7f9db45e" (UID: "863693a2-ad89-4568-abd2-d96f7f9db45e"). InnerVolumeSpecName "kube-api-access-zptmf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.383742 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/863693a2-ad89-4568-abd2-d96f7f9db45e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "863693a2-ad89-4568-abd2-d96f7f9db45e" (UID: "863693a2-ad89-4568-abd2-d96f7f9db45e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.455479 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d30fdb9a-8c75-4713-be06-be83ca5aa897-utilities\") pod \"redhat-marketplace-hs997\" (UID: \"d30fdb9a-8c75-4713-be06-be83ca5aa897\") " pod="openshift-marketplace/redhat-marketplace-hs997" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.455569 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.455601 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d30fdb9a-8c75-4713-be06-be83ca5aa897-catalog-content\") pod \"redhat-marketplace-hs997\" (UID: \"d30fdb9a-8c75-4713-be06-be83ca5aa897\") " pod="openshift-marketplace/redhat-marketplace-hs997" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.455643 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zqsf\" (UniqueName: \"kubernetes.io/projected/d30fdb9a-8c75-4713-be06-be83ca5aa897-kube-api-access-5zqsf\") pod \"redhat-marketplace-hs997\" (UID: \"d30fdb9a-8c75-4713-be06-be83ca5aa897\") " pod="openshift-marketplace/redhat-marketplace-hs997" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.455792 5119 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/863693a2-ad89-4568-abd2-d96f7f9db45e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.455809 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zptmf\" (UniqueName: \"kubernetes.io/projected/863693a2-ad89-4568-abd2-d96f7f9db45e-kube-api-access-zptmf\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.455818 5119 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/863693a2-ad89-4568-abd2-d96f7f9db45e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:06 crc kubenswrapper[5119]: E0220 00:12:06.456246 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:06.956223285 +0000 UTC m=+108.935187577 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.476884 5119 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-gmfcd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 00:12:06 crc kubenswrapper[5119]: [-]has-synced failed: reason withheld Feb 20 00:12:06 crc kubenswrapper[5119]: [+]process-running ok Feb 20 00:12:06 crc kubenswrapper[5119]: healthz check failed Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.476972 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" podUID="3a805538-2592-40cf-9131-444b1c0f3cbb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.556800 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.557055 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d30fdb9a-8c75-4713-be06-be83ca5aa897-utilities\") pod \"redhat-marketplace-hs997\" (UID: \"d30fdb9a-8c75-4713-be06-be83ca5aa897\") " pod="openshift-marketplace/redhat-marketplace-hs997" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.557120 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d30fdb9a-8c75-4713-be06-be83ca5aa897-catalog-content\") pod \"redhat-marketplace-hs997\" (UID: \"d30fdb9a-8c75-4713-be06-be83ca5aa897\") " pod="openshift-marketplace/redhat-marketplace-hs997" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.557164 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5zqsf\" (UniqueName: \"kubernetes.io/projected/d30fdb9a-8c75-4713-be06-be83ca5aa897-kube-api-access-5zqsf\") pod \"redhat-marketplace-hs997\" (UID: \"d30fdb9a-8c75-4713-be06-be83ca5aa897\") " pod="openshift-marketplace/redhat-marketplace-hs997" Feb 20 00:12:06 crc kubenswrapper[5119]: E0220 00:12:06.557636 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:07.0576172 +0000 UTC m=+109.036581492 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.557984 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d30fdb9a-8c75-4713-be06-be83ca5aa897-utilities\") pod \"redhat-marketplace-hs997\" (UID: \"d30fdb9a-8c75-4713-be06-be83ca5aa897\") " pod="openshift-marketplace/redhat-marketplace-hs997" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.558197 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d30fdb9a-8c75-4713-be06-be83ca5aa897-catalog-content\") pod \"redhat-marketplace-hs997\" (UID: \"d30fdb9a-8c75-4713-be06-be83ca5aa897\") " pod="openshift-marketplace/redhat-marketplace-hs997" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.585666 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zqsf\" (UniqueName: \"kubernetes.io/projected/d30fdb9a-8c75-4713-be06-be83ca5aa897-kube-api-access-5zqsf\") pod \"redhat-marketplace-hs997\" (UID: \"d30fdb9a-8c75-4713-be06-be83ca5aa897\") " pod="openshift-marketplace/redhat-marketplace-hs997" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.659107 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:06 crc kubenswrapper[5119]: E0220 00:12:06.659531 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:07.159516739 +0000 UTC m=+109.138481031 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.666980 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hs997" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.685949 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-7c2z6"] Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.722969 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rprkl"] Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.761450 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:06 crc kubenswrapper[5119]: E0220 00:12:06.761973 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:07.261952562 +0000 UTC m=+109.240916854 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.865740 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:06 crc kubenswrapper[5119]: E0220 00:12:06.866482 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:07.366453871 +0000 UTC m=+109.345418343 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.927148 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gr894"] Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.946034 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gr894"] Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.952040 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gr894" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.959789 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.967288 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.968058 5119 generic.go:358] "Generic (PLEG): container finished" podID="8c54e6d8-4a07-479d-aec6-81085d348561" containerID="f17d82722af616c6de0474b99193fe6da41673000c558716352d4270131bf90c" exitCode=0 Feb 20 00:12:06 crc kubenswrapper[5119]: E0220 00:12:06.968192 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:07.467913377 +0000 UTC m=+109.446877669 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.968107 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4krw4" event={"ID":"8c54e6d8-4a07-479d-aec6-81085d348561","Type":"ContainerDied","Data":"f17d82722af616c6de0474b99193fe6da41673000c558716352d4270131bf90c"} Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.968771 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:06 crc kubenswrapper[5119]: E0220 00:12:06.969228 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:07.469218072 +0000 UTC m=+109.448182364 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.976920 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8dmqk" event={"ID":"202de310-1c98-4a10-9ea6-b32d9debbd8c","Type":"ContainerStarted","Data":"61470e5b4ab03f7cacad522187c78b4c7ac02551bb0f721d535af2e933ef3963"} Feb 20 00:12:06 crc kubenswrapper[5119]: I0220 00:12:06.977816 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-8dmqk" Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.013866 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-whcrz" event={"ID":"1a0d9df3-9b02-4473-bf24-e9af2df871e6","Type":"ContainerStarted","Data":"4efbb4b0cec191749bb453365d4ea3c8ac7c4069b3be75a2c92035537b3266ff"} Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.036890 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-fwrs2" event={"ID":"863693a2-ad89-4568-abd2-d96f7f9db45e","Type":"ContainerDied","Data":"b426dc518f52e15da9f05b392806406d0bda9cfd93b3e09033d7c2cfb5afe36a"} Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.037243 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b426dc518f52e15da9f05b392806406d0bda9cfd93b3e09033d7c2cfb5afe36a" Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.037574 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-fwrs2" Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.059978 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.085459 5119 generic.go:358] "Generic (PLEG): container finished" podID="26fedbc9-e967-46be-9bd2-00822aa128a5" containerID="a1055600a504f918cfaf9eb422d6d77c30577ba34c68084254e7d3d399465774" exitCode=0 Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.085609 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zk67r" event={"ID":"26fedbc9-e967-46be-9bd2-00822aa128a5","Type":"ContainerDied","Data":"a1055600a504f918cfaf9eb422d6d77c30577ba34c68084254e7d3d399465774"} Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.086757 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-8dmqk" podStartSLOduration=11.08672042 podStartE2EDuration="11.08672042s" podCreationTimestamp="2026-02-20 00:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:07.0580532 +0000 UTC m=+109.037017492" watchObservedRunningTime="2026-02-20 00:12:07.08672042 +0000 UTC m=+109.065684712" Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.087615 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.087965 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd71310-68b1-4580-8ea5-053669823d3c-catalog-content\") pod \"redhat-operators-gr894\" (UID: \"fdd71310-68b1-4580-8ea5-053669823d3c\") " pod="openshift-marketplace/redhat-operators-gr894" Feb 20 00:12:07 crc kubenswrapper[5119]: E0220 00:12:07.088115 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:07.588074727 +0000 UTC m=+109.567039029 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.088259 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd71310-68b1-4580-8ea5-053669823d3c-utilities\") pod \"redhat-operators-gr894\" (UID: \"fdd71310-68b1-4580-8ea5-053669823d3c\") " pod="openshift-marketplace/redhat-operators-gr894" Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.088651 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42l4k\" (UniqueName: \"kubernetes.io/projected/fdd71310-68b1-4580-8ea5-053669823d3c-kube-api-access-42l4k\") pod \"redhat-operators-gr894\" (UID: \"fdd71310-68b1-4580-8ea5-053669823d3c\") " pod="openshift-marketplace/redhat-operators-gr894" Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.088774 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:07 crc kubenswrapper[5119]: E0220 00:12:07.089253 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:07.589235618 +0000 UTC m=+109.568199910 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.101262 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-8qbnc" Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.101444 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rprkl" event={"ID":"14397656-dc1d-4bf5-a1f2-e9b79fab3e53","Type":"ContainerStarted","Data":"620d10ec41ff9ae066b4faea8b611af500a3e0a517d8e38b5f25c86ef780e4a5"} Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.102021 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-qg8t8" Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.103049 5119 patch_prober.go:28] interesting pod/downloads-747b44746d-8mw6h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.103199 5119 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-8mw6h" podUID="8492563b-3d36-4b7a-a88b-ffd27fd60eb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.113947 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-whcrz" podStartSLOduration=88.113924892 podStartE2EDuration="1m28.113924892s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:07.109784451 +0000 UTC m=+109.088748753" watchObservedRunningTime="2026-02-20 00:12:07.113924892 +0000 UTC m=+109.092889184" Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.191815 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.192695 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd71310-68b1-4580-8ea5-053669823d3c-catalog-content\") pod \"redhat-operators-gr894\" (UID: \"fdd71310-68b1-4580-8ea5-053669823d3c\") " pod="openshift-marketplace/redhat-operators-gr894" Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.192830 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd71310-68b1-4580-8ea5-053669823d3c-utilities\") pod \"redhat-operators-gr894\" (UID: 
\"fdd71310-68b1-4580-8ea5-053669823d3c\") " pod="openshift-marketplace/redhat-operators-gr894" Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.192976 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-42l4k\" (UniqueName: \"kubernetes.io/projected/fdd71310-68b1-4580-8ea5-053669823d3c-kube-api-access-42l4k\") pod \"redhat-operators-gr894\" (UID: \"fdd71310-68b1-4580-8ea5-053669823d3c\") " pod="openshift-marketplace/redhat-operators-gr894" Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.193475 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd71310-68b1-4580-8ea5-053669823d3c-utilities\") pod \"redhat-operators-gr894\" (UID: \"fdd71310-68b1-4580-8ea5-053669823d3c\") " pod="openshift-marketplace/redhat-operators-gr894" Feb 20 00:12:07 crc kubenswrapper[5119]: E0220 00:12:07.194589 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:07.694558919 +0000 UTC m=+109.673523211 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.195820 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd71310-68b1-4580-8ea5-053669823d3c-catalog-content\") pod \"redhat-operators-gr894\" (UID: \"fdd71310-68b1-4580-8ea5-053669823d3c\") " pod="openshift-marketplace/redhat-operators-gr894" Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.236051 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-42l4k\" (UniqueName: \"kubernetes.io/projected/fdd71310-68b1-4580-8ea5-053669823d3c-kube-api-access-42l4k\") pod \"redhat-operators-gr894\" (UID: \"fdd71310-68b1-4580-8ea5-053669823d3c\") " pod="openshift-marketplace/redhat-operators-gr894" Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.295478 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:07 crc kubenswrapper[5119]: E0220 00:12:07.296033 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:07.796011856 +0000 UTC m=+109.774976138 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.296993 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hs997"] Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.312065 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gr894" Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.317134 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rg995"] Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.327399 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rg995" Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.346945 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rg995"] Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.402325 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.402673 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/651bc74f-974d-4562-b93a-016f18443fb6-utilities\") pod \"redhat-operators-rg995\" (UID: \"651bc74f-974d-4562-b93a-016f18443fb6\") " pod="openshift-marketplace/redhat-operators-rg995" Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.402709 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/651bc74f-974d-4562-b93a-016f18443fb6-catalog-content\") pod \"redhat-operators-rg995\" (UID: \"651bc74f-974d-4562-b93a-016f18443fb6\") " pod="openshift-marketplace/redhat-operators-rg995" Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.402743 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qws5\" (UniqueName: \"kubernetes.io/projected/651bc74f-974d-4562-b93a-016f18443fb6-kube-api-access-2qws5\") pod \"redhat-operators-rg995\" (UID: \"651bc74f-974d-4562-b93a-016f18443fb6\") " pod="openshift-marketplace/redhat-operators-rg995" Feb 20 00:12:07 crc kubenswrapper[5119]: E0220 00:12:07.402910 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:07.902887738 +0000 UTC m=+109.881852030 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.407306 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-qg8t8" Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.469716 5119 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-gmfcd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 00:12:07 crc kubenswrapper[5119]: [-]has-synced failed: reason withheld Feb 20 00:12:07 crc kubenswrapper[5119]: [+]process-running ok Feb 20 00:12:07 crc kubenswrapper[5119]: healthz check failed Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.469794 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" podUID="3a805538-2592-40cf-9131-444b1c0f3cbb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.504905 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/651bc74f-974d-4562-b93a-016f18443fb6-utilities\") pod \"redhat-operators-rg995\" (UID: \"651bc74f-974d-4562-b93a-016f18443fb6\") " pod="openshift-marketplace/redhat-operators-rg995" Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.504971 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/651bc74f-974d-4562-b93a-016f18443fb6-catalog-content\") pod \"redhat-operators-rg995\" (UID: \"651bc74f-974d-4562-b93a-016f18443fb6\") " pod="openshift-marketplace/redhat-operators-rg995" Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.505016 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.505045 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2qws5\" (UniqueName: \"kubernetes.io/projected/651bc74f-974d-4562-b93a-016f18443fb6-kube-api-access-2qws5\") pod \"redhat-operators-rg995\" (UID: \"651bc74f-974d-4562-b93a-016f18443fb6\") " pod="openshift-marketplace/redhat-operators-rg995" Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.505958 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/651bc74f-974d-4562-b93a-016f18443fb6-utilities\") pod \"redhat-operators-rg995\" (UID: \"651bc74f-974d-4562-b93a-016f18443fb6\") " pod="openshift-marketplace/redhat-operators-rg995" Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.506188 5119 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/651bc74f-974d-4562-b93a-016f18443fb6-catalog-content\") pod \"redhat-operators-rg995\" (UID: \"651bc74f-974d-4562-b93a-016f18443fb6\") " pod="openshift-marketplace/redhat-operators-rg995" Feb 20 00:12:07 crc kubenswrapper[5119]: E0220 00:12:07.506509 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:08.006493002 +0000 UTC m=+109.985457294 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.528455 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qws5\" (UniqueName: \"kubernetes.io/projected/651bc74f-974d-4562-b93a-016f18443fb6-kube-api-access-2qws5\") pod \"redhat-operators-rg995\" (UID: \"651bc74f-974d-4562-b93a-016f18443fb6\") " pod="openshift-marketplace/redhat-operators-rg995" Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.612790 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:07 crc kubenswrapper[5119]: E0220 00:12:07.614141 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:08.114118205 +0000 UTC m=+110.093082497 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.719681 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:07 crc kubenswrapper[5119]: E0220 00:12:07.720153 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-02-20 00:12:08.220137945 +0000 UTC m=+110.199102237 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.751942 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rg995" Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.821106 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:07 crc kubenswrapper[5119]: E0220 00:12:07.821526 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:08.321501589 +0000 UTC m=+110.300465881 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.842687 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gr894"] Feb 20 00:12:07 crc kubenswrapper[5119]: I0220 00:12:07.923035 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:07 crc kubenswrapper[5119]: E0220 00:12:07.923411 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:08.423395388 +0000 UTC m=+110.402359680 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.023917 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:08 crc kubenswrapper[5119]: E0220 00:12:08.024424 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:08.524404442 +0000 UTC m=+110.503368734 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.122054 5119 generic.go:358] "Generic (PLEG): container finished" podID="d30fdb9a-8c75-4713-be06-be83ca5aa897" containerID="48f11e597c2ec98d0c0bafbd77745a6282a5efab77ab6f9933598ad724feae07" exitCode=0 Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.122194 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hs997" event={"ID":"d30fdb9a-8c75-4713-be06-be83ca5aa897","Type":"ContainerDied","Data":"48f11e597c2ec98d0c0bafbd77745a6282a5efab77ab6f9933598ad724feae07"} Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.122231 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hs997" event={"ID":"d30fdb9a-8c75-4713-be06-be83ca5aa897","Type":"ContainerStarted","Data":"852f2fad1f5f946df2fddc9be03c76cfd4a7f550b8f8abd0d277cedbe3ec29ac"} Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.126678 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:08 crc kubenswrapper[5119]: E0220 00:12:08.127201 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:08.627185614 +0000 UTC m=+110.606149906 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.144480 5119 generic.go:358] "Generic (PLEG): container finished" podID="14397656-dc1d-4bf5-a1f2-e9b79fab3e53" containerID="503d8075599f7637f248bdf3649f4cc9be53dd854aae0d9ecb47299b69c3d980" exitCode=0 Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.147683 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rprkl" event={"ID":"14397656-dc1d-4bf5-a1f2-e9b79fab3e53","Type":"ContainerDied","Data":"503d8075599f7637f248bdf3649f4cc9be53dd854aae0d9ecb47299b69c3d980"} Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.162815 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr894" event={"ID":"fdd71310-68b1-4580-8ea5-053669823d3c","Type":"ContainerStarted","Data":"610488e946b5f11ce0c5cff2a8079b2c9392893691db51ed0d92fbf416b29265"} Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.163356 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-7c2z6" podUID="bcdbee0b-45bb-462c-aac5-ccb96d5b814b" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://d4f7da01f3dc86caf7fd824b6f35836203cc49897c458c262289c2d45b168504" gracePeriod=30 Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.168741 5119 patch_prober.go:28] interesting pod/downloads-747b44746d-8mw6h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.168805 5119 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-8mw6h" podUID="8492563b-3d36-4b7a-a88b-ffd27fd60eb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.229327 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:08 crc kubenswrapper[5119]: E0220 00:12:08.235762 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:08.735716771 +0000 UTC m=+110.714681063 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.244629 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:08 crc kubenswrapper[5119]: E0220 00:12:08.246262 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:08.746227803 +0000 UTC m=+110.725192135 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.317428 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rg995"] Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.349148 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:08 crc kubenswrapper[5119]: E0220 00:12:08.351096 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:08.851073461 +0000 UTC m=+110.830037753 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.451624 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:08 crc kubenswrapper[5119]: E0220 00:12:08.452076 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:08.952054975 +0000 UTC m=+110.931019267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.472015 5119 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-gmfcd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 00:12:08 crc kubenswrapper[5119]: [-]has-synced failed: reason withheld Feb 20 00:12:08 crc kubenswrapper[5119]: [+]process-running ok Feb 20 00:12:08 crc kubenswrapper[5119]: healthz check failed Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.472097 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" podUID="3a805538-2592-40cf-9131-444b1c0f3cbb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.503007 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.509486 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.511767 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.512256 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.528308 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.559518 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:08 crc kubenswrapper[5119]: E0220 00:12:08.559668 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:09.059635077 +0000 UTC m=+111.038599369 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.559778 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:08 crc kubenswrapper[5119]: E0220 00:12:08.560452 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:09.060432369 +0000 UTC m=+111.039396661 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.663446 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.663888 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bd9bfc9-506a-44c3-9f9c-728cf75bf99d-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"1bd9bfc9-506a-44c3-9f9c-728cf75bf99d\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.663976 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1bd9bfc9-506a-44c3-9f9c-728cf75bf99d-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"1bd9bfc9-506a-44c3-9f9c-728cf75bf99d\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 20 00:12:08 crc kubenswrapper[5119]: E0220 00:12:08.664177 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:09.164153216 +0000 UTC m=+111.143117508 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.765957 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bd9bfc9-506a-44c3-9f9c-728cf75bf99d-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"1bd9bfc9-506a-44c3-9f9c-728cf75bf99d\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.766017 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.766072 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1bd9bfc9-506a-44c3-9f9c-728cf75bf99d-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"1bd9bfc9-506a-44c3-9f9c-728cf75bf99d\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 20 00:12:08 crc kubenswrapper[5119]: E0220 00:12:08.767092 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:09.267073552 +0000 UTC m=+111.246037844 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.767274 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1bd9bfc9-506a-44c3-9f9c-728cf75bf99d-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"1bd9bfc9-506a-44c3-9f9c-728cf75bf99d\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.795826 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bd9bfc9-506a-44c3-9f9c-728cf75bf99d-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"1bd9bfc9-506a-44c3-9f9c-728cf75bf99d\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.830335 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.867927 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:08 crc kubenswrapper[5119]: E0220 00:12:08.868281 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:09.368262052 +0000 UTC m=+111.347226334 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:08 crc kubenswrapper[5119]: I0220 00:12:08.969663 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:08 crc kubenswrapper[5119]: E0220 00:12:08.970042 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:09.470026977 +0000 UTC m=+111.448991269 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:09 crc kubenswrapper[5119]: I0220 00:12:09.076341 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:09 crc kubenswrapper[5119]: E0220 00:12:09.076811 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:09.576772156 +0000 UTC m=+111.555736438 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:09 crc kubenswrapper[5119]: I0220 00:12:09.077041 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:09 crc kubenswrapper[5119]: E0220 00:12:09.077518 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:09.577511326 +0000 UTC m=+111.556475618 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:09 crc kubenswrapper[5119]: I0220 00:12:09.194060 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:09 crc kubenswrapper[5119]: E0220 00:12:09.194679 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:09.694659134 +0000 UTC m=+111.673623426 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:09 crc kubenswrapper[5119]: I0220 00:12:09.205169 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4mff9" event={"ID":"965b8822-0f09-43d5-a864-775966130c7d","Type":"ContainerStarted","Data":"446550f662be7478c4ca2def3a8fbe9d23e03c3afa7cf58c88fcd1202bbad361"} Feb 20 00:12:09 crc kubenswrapper[5119]: I0220 00:12:09.230257 5119 generic.go:358] "Generic (PLEG): container finished" podID="fdd71310-68b1-4580-8ea5-053669823d3c" containerID="dd9f7793677949b5a329e7142c48208817d1d4fd303eb55b3384eb40d8bc328b" exitCode=0 Feb 20 00:12:09 crc kubenswrapper[5119]: I0220 00:12:09.230612 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr894" event={"ID":"fdd71310-68b1-4580-8ea5-053669823d3c","Type":"ContainerDied","Data":"dd9f7793677949b5a329e7142c48208817d1d4fd303eb55b3384eb40d8bc328b"} Feb 20 00:12:09 crc kubenswrapper[5119]: I0220 00:12:09.235332 5119 generic.go:358] "Generic (PLEG): container finished" podID="651bc74f-974d-4562-b93a-016f18443fb6" containerID="e9fa4abb943c3b5b04fe0c6d3e09514c1617f35f9d94511341399e3e41e4cb91" exitCode=0 Feb 20 00:12:09 crc kubenswrapper[5119]: I0220 00:12:09.235775 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rg995" event={"ID":"651bc74f-974d-4562-b93a-016f18443fb6","Type":"ContainerDied","Data":"e9fa4abb943c3b5b04fe0c6d3e09514c1617f35f9d94511341399e3e41e4cb91"} Feb 20 00:12:09 crc kubenswrapper[5119]: I0220 00:12:09.235833 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rg995" event={"ID":"651bc74f-974d-4562-b93a-016f18443fb6","Type":"ContainerStarted","Data":"6a1904cd8a30066b17d2972c7f8b22d750d1e9f9f054166f3b6ee7b8a4045e84"} Feb 20 00:12:09 crc kubenswrapper[5119]: I0220 00:12:09.299150 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:09 crc kubenswrapper[5119]: E0220 00:12:09.305055 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:09.805035351 +0000 UTC m=+111.783999643 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:09 crc kubenswrapper[5119]: I0220 00:12:09.400108 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:09 crc kubenswrapper[5119]: E0220 00:12:09.400452 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:09.900411564 +0000 UTC m=+111.879375856 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:09 crc kubenswrapper[5119]: I0220 00:12:09.400794 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:09 crc kubenswrapper[5119]: E0220 00:12:09.401452 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:09.901446232 +0000 UTC m=+111.880410524 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:09 crc kubenswrapper[5119]: I0220 00:12:09.467911 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" Feb 20 00:12:09 crc kubenswrapper[5119]: I0220 00:12:09.476363 5119 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-gmfcd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 00:12:09 crc kubenswrapper[5119]: [-]has-synced failed: reason withheld Feb 20 00:12:09 crc kubenswrapper[5119]: [+]process-running ok Feb 20 00:12:09 crc kubenswrapper[5119]: healthz check failed Feb 20 00:12:09 crc kubenswrapper[5119]: I0220 00:12:09.476416 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" podUID="3a805538-2592-40cf-9131-444b1c0f3cbb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 00:12:09 crc kubenswrapper[5119]: I0220 00:12:09.503118 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:09 crc kubenswrapper[5119]: E0220 00:12:09.503432 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:10.003392751 +0000 UTC m=+111.982357043 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:09 crc kubenswrapper[5119]: I0220 00:12:09.503772 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:09 crc kubenswrapper[5119]: E0220 00:12:09.504223 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:10.004215874 +0000 UTC m=+111.983180166 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:09 crc kubenswrapper[5119]: I0220 00:12:09.592998 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-vz6mj" Feb 20 00:12:09 crc kubenswrapper[5119]: I0220 00:12:09.593078 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-vz6mj" Feb 20 00:12:09 crc kubenswrapper[5119]: I0220 00:12:09.597089 5119 patch_prober.go:28] interesting pod/console-64d44f6ddf-vz6mj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Feb 20 00:12:09 crc kubenswrapper[5119]: I0220 00:12:09.597178 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-vz6mj" podUID="5ae023e2-1168-43b9-83b4-49ff02bb9ea4" containerName="console" probeResult="failure" output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" Feb 20 00:12:09 crc kubenswrapper[5119]: I0220 00:12:09.604721 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:09 crc kubenswrapper[5119]: E0220 00:12:09.604896 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:10.104867069 +0000 UTC m=+112.083831361 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:09 crc kubenswrapper[5119]: I0220 00:12:09.605603 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:09 crc kubenswrapper[5119]: E0220 00:12:09.606041 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-02-20 00:12:10.106034241 +0000 UTC m=+112.084998533 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:09 crc kubenswrapper[5119]: I0220 00:12:09.706279 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:09 crc kubenswrapper[5119]: E0220 00:12:09.707990 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:10.20796555 +0000 UTC m=+112.186929852 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:09 crc kubenswrapper[5119]: I0220 00:12:09.748029 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Feb 20 00:12:09 crc kubenswrapper[5119]: I0220 00:12:09.808645 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:09 crc kubenswrapper[5119]: E0220 00:12:09.809008 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:10.308991726 +0000 UTC m=+112.287956018 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:09 crc kubenswrapper[5119]: I0220 00:12:09.910071 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:09 crc kubenswrapper[5119]: E0220 00:12:09.910627 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:10.410604116 +0000 UTC m=+112.389568408 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:10 crc kubenswrapper[5119]: I0220 00:12:10.012069 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:10 crc kubenswrapper[5119]: E0220 00:12:10.012521 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:10.512503835 +0000 UTC m=+112.491468117 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:10 crc kubenswrapper[5119]: I0220 00:12:10.113101 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:10 crc kubenswrapper[5119]: E0220 00:12:10.113498 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:10.613478599 +0000 UTC m=+112.592442881 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:10 crc kubenswrapper[5119]: I0220 00:12:10.216563 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:10 crc kubenswrapper[5119]: E0220 00:12:10.216940 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:10.716923069 +0000 UTC m=+112.695887361 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:10 crc kubenswrapper[5119]: I0220 00:12:10.252088 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"1bd9bfc9-506a-44c3-9f9c-728cf75bf99d","Type":"ContainerStarted","Data":"12c376ec874c22b1ae2798dca3a1203eac742ba65a28d3ac346c240509344f78"} Feb 20 00:12:10 crc kubenswrapper[5119]: I0220 00:12:10.318085 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:10 crc kubenswrapper[5119]: E0220 00:12:10.318482 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:10.818442277 +0000 UTC m=+112.797406569 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:10 crc kubenswrapper[5119]: I0220 00:12:10.424123 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:10 crc kubenswrapper[5119]: E0220 00:12:10.425421 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:10.924817186 +0000 UTC m=+112.903781478 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:10 crc kubenswrapper[5119]: I0220 00:12:10.469228 5119 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-gmfcd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 00:12:10 crc kubenswrapper[5119]: [-]has-synced failed: reason withheld Feb 20 00:12:10 crc kubenswrapper[5119]: [+]process-running ok Feb 20 00:12:10 crc kubenswrapper[5119]: healthz check failed Feb 20 00:12:10 crc kubenswrapper[5119]: I0220 00:12:10.470311 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" podUID="3a805538-2592-40cf-9131-444b1c0f3cbb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 00:12:10 crc kubenswrapper[5119]: I0220 00:12:10.532459 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:10 crc kubenswrapper[5119]: E0220 00:12:10.532806 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:11.032785008 +0000 UTC m=+113.011749300 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:10 crc kubenswrapper[5119]: I0220 00:12:10.635334 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:10 crc kubenswrapper[5119]: E0220 00:12:10.636002 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:11.135978251 +0000 UTC m=+113.114942543 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:10 crc kubenswrapper[5119]: I0220 00:12:10.736857 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:10 crc kubenswrapper[5119]: E0220 00:12:10.737093 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:11.237053388 +0000 UTC m=+113.216017680 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:10 crc kubenswrapper[5119]: I0220 00:12:10.737641 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:10 crc kubenswrapper[5119]: E0220 00:12:10.738170 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:11.238161177 +0000 UTC m=+113.217125469 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:10 crc kubenswrapper[5119]: I0220 00:12:10.839106 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:10 crc kubenswrapper[5119]: E0220 00:12:10.839344 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:11.339302426 +0000 UTC m=+113.318266718 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:10 crc kubenswrapper[5119]: I0220 00:12:10.840021 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:10 crc kubenswrapper[5119]: E0220 00:12:10.840430 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:11.340414606 +0000 UTC m=+113.319378898 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:10 crc kubenswrapper[5119]: I0220 00:12:10.933664 5119 ???:1] "http: TLS handshake error from 192.168.126.11:52218: no serving certificate available for the kubelet" Feb 20 00:12:10 crc kubenswrapper[5119]: I0220 00:12:10.941604 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:10 crc kubenswrapper[5119]: E0220 00:12:10.941904 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:11.441842162 +0000 UTC m=+113.420806454 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:11 crc kubenswrapper[5119]: I0220 00:12:11.049420 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:11 crc kubenswrapper[5119]: E0220 00:12:11.049991 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:11.549971988 +0000 UTC m=+113.528936290 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:11 crc kubenswrapper[5119]: I0220 00:12:11.151250 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:11 crc kubenswrapper[5119]: E0220 00:12:11.151472 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:11.651434005 +0000 UTC m=+113.630398297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:11 crc kubenswrapper[5119]: I0220 00:12:11.152252 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:11 crc kubenswrapper[5119]: E0220 00:12:11.152681 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:11.652661518 +0000 UTC m=+113.631625810 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:11 crc kubenswrapper[5119]: I0220 00:12:11.253711 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:11 crc kubenswrapper[5119]: E0220 00:12:11.254535 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:11.7539234 +0000 UTC m=+113.732887682 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:11 crc kubenswrapper[5119]: I0220 00:12:11.255307 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:11 crc kubenswrapper[5119]: E0220 00:12:11.255784 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:11.755773169 +0000 UTC m=+113.734737461 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:11 crc kubenswrapper[5119]: I0220 00:12:11.285753 5119 generic.go:358] "Generic (PLEG): container finished" podID="1bd9bfc9-506a-44c3-9f9c-728cf75bf99d" containerID="11fd2642e0b04c3bd386889f057a81dbe24271ffd235091a7171acc1932c1a75" exitCode=0 Feb 20 00:12:11 crc kubenswrapper[5119]: I0220 00:12:11.285958 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"1bd9bfc9-506a-44c3-9f9c-728cf75bf99d","Type":"ContainerDied","Data":"11fd2642e0b04c3bd386889f057a81dbe24271ffd235091a7171acc1932c1a75"} Feb 20 00:12:11 crc kubenswrapper[5119]: I0220 00:12:11.357171 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:11 crc kubenswrapper[5119]: E0220 00:12:11.357367 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:11.857334148 +0000 UTC m=+113.836298440 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:11 crc kubenswrapper[5119]: I0220 00:12:11.358250 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:11 crc kubenswrapper[5119]: E0220 00:12:11.358888 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:11.85886124 +0000 UTC m=+113.837825562 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:11 crc kubenswrapper[5119]: I0220 00:12:11.459841 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:11 crc kubenswrapper[5119]: E0220 00:12:11.460081 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:11.96005338 +0000 UTC m=+113.939017672 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:11 crc kubenswrapper[5119]: I0220 00:12:11.460312 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:11 crc kubenswrapper[5119]: E0220 00:12:11.460728 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:11.960717177 +0000 UTC m=+113.939681469 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:11 crc kubenswrapper[5119]: I0220 00:12:11.470884 5119 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-gmfcd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 00:12:11 crc kubenswrapper[5119]: [-]has-synced failed: reason withheld Feb 20 00:12:11 crc kubenswrapper[5119]: [+]process-running ok Feb 20 00:12:11 crc kubenswrapper[5119]: healthz check failed Feb 20 00:12:11 crc kubenswrapper[5119]: I0220 00:12:11.471012 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" podUID="3a805538-2592-40cf-9131-444b1c0f3cbb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 00:12:11 crc kubenswrapper[5119]: I0220 00:12:11.562315 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:11 crc kubenswrapper[5119]: E0220 00:12:11.562622 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:12.062582845 +0000 UTC m=+114.041547137 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:11 crc kubenswrapper[5119]: I0220 00:12:11.564346 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:11 crc kubenswrapper[5119]: E0220 00:12:11.564868 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:12.064846826 +0000 UTC m=+114.043811118 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:11 crc kubenswrapper[5119]: I0220 00:12:11.665636 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:11 crc kubenswrapper[5119]: E0220 00:12:11.665879 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:12.165841721 +0000 UTC m=+114.144806013 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:11 crc kubenswrapper[5119]: I0220 00:12:11.666069 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:11 crc kubenswrapper[5119]: E0220 00:12:11.666470 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:12.166453087 +0000 UTC m=+114.145417379 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:11 crc kubenswrapper[5119]: I0220 00:12:11.767231 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:11 crc kubenswrapper[5119]: E0220 00:12:11.767655 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:12.267635027 +0000 UTC m=+114.246599319 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:11 crc kubenswrapper[5119]: I0220 00:12:11.872949 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:11 crc kubenswrapper[5119]: E0220 00:12:11.873447 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:12.373423159 +0000 UTC m=+114.352387451 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:11 crc kubenswrapper[5119]: I0220 00:12:11.975416 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:11 crc kubenswrapper[5119]: E0220 00:12:11.977513 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:12.477492526 +0000 UTC m=+114.456456818 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.081440 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:12 crc kubenswrapper[5119]: E0220 00:12:12.082061 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:12.582028906 +0000 UTC m=+114.560993198 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.184179 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 20 00:12:12 crc kubenswrapper[5119]: E0220 00:12:12.185008 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:12.684899891 +0000 UTC m=+114.663864193 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.230889 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.238812 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.242530 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.242650 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.243437 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.289123 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c050f1b-62ba-4f8f-9eab-696aaf960f00-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"6c050f1b-62ba-4f8f-9eab-696aaf960f00\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.289711 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c050f1b-62ba-4f8f-9eab-696aaf960f00-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"6c050f1b-62ba-4f8f-9eab-696aaf960f00\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.291056 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2"
Feb 20 00:12:12 crc kubenswrapper[5119]: E0220 00:12:12.291727 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:12.791702981 +0000 UTC m=+114.770667273 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.393612 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 20 00:12:12 crc kubenswrapper[5119]: E0220 00:12:12.393859 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:12.893821626 +0000 UTC m=+114.872785918 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.394399 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c050f1b-62ba-4f8f-9eab-696aaf960f00-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"6c050f1b-62ba-4f8f-9eab-696aaf960f00\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.394437 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c050f1b-62ba-4f8f-9eab-696aaf960f00-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"6c050f1b-62ba-4f8f-9eab-696aaf960f00\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.394582 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c050f1b-62ba-4f8f-9eab-696aaf960f00-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"6c050f1b-62ba-4f8f-9eab-696aaf960f00\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.394675 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:12 crc kubenswrapper[5119]: E0220 00:12:12.395111 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:12.89508934 +0000 UTC m=+114.874053632 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.433595 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c050f1b-62ba-4f8f-9eab-696aaf960f00-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"6c050f1b-62ba-4f8f-9eab-696aaf960f00\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.471282 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.474475 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-gmfcd" Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.496205 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:12 crc kubenswrapper[5119]: E0220 00:12:12.497250 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:12.997177514 +0000 UTC m=+114.976141806 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.589761 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.598805 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:12 crc kubenswrapper[5119]: E0220 00:12:12.600186 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:13.100170381 +0000 UTC m=+115.079134673 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.701111 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:12 crc kubenswrapper[5119]: E0220 00:12:12.701364 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:13.20133655 +0000 UTC m=+115.180300832 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.701660 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:12 crc kubenswrapper[5119]: E0220 00:12:12.702061 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:13.20203978 +0000 UTC m=+115.181004072 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.709475 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.802809 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.802905 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bd9bfc9-506a-44c3-9f9c-728cf75bf99d-kube-api-access\") pod \"1bd9bfc9-506a-44c3-9f9c-728cf75bf99d\" (UID: \"1bd9bfc9-506a-44c3-9f9c-728cf75bf99d\") " Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.803067 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1bd9bfc9-506a-44c3-9f9c-728cf75bf99d-kubelet-dir\") pod \"1bd9bfc9-506a-44c3-9f9c-728cf75bf99d\" (UID: \"1bd9bfc9-506a-44c3-9f9c-728cf75bf99d\") " Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.803332 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1bd9bfc9-506a-44c3-9f9c-728cf75bf99d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1bd9bfc9-506a-44c3-9f9c-728cf75bf99d" (UID: "1bd9bfc9-506a-44c3-9f9c-728cf75bf99d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:12:12 crc kubenswrapper[5119]: E0220 00:12:12.803467 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:13.303401213 +0000 UTC m=+115.282365505 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.816358 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bd9bfc9-506a-44c3-9f9c-728cf75bf99d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1bd9bfc9-506a-44c3-9f9c-728cf75bf99d" (UID: "1bd9bfc9-506a-44c3-9f9c-728cf75bf99d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.838187 5119 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.905267 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.905895 5119 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1bd9bfc9-506a-44c3-9f9c-728cf75bf99d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:12 crc kubenswrapper[5119]: I0220 00:12:12.905914 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bd9bfc9-506a-44c3-9f9c-728cf75bf99d-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:12 crc kubenswrapper[5119]: E0220 00:12:12.906022 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:12:13.405994641 +0000 UTC m=+115.384959153 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-qh5n2" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.007250 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:13 crc kubenswrapper[5119]: E0220 00:12:13.007763 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:12:13.507738085 +0000 UTC m=+115.486702377 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.016019 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.027484 5119 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-20T00:12:12.838219739Z","UUID":"e864a549-182d-4c63-9522-44865d5aa41b","Handler":null,"Name":"","Endpoint":""} Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.034281 5119 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.034325 5119 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 20 00:12:13 crc kubenswrapper[5119]: W0220 00:12:13.036008 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6c050f1b_62ba_4f8f_9eab_696aaf960f00.slice/crio-aec2aeb96da008948431455094b4706c2c099846e7b1a8d68b51c82708dba092 WatchSource:0}: Error finding container aec2aeb96da008948431455094b4706c2c099846e7b1a8d68b51c82708dba092: Status 404 returned error can't find the container with id aec2aeb96da008948431455094b4706c2c099846e7b1a8d68b51c82708dba092 Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.109487 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.115244 5119 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
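Annotation, not part of the captured journal: the entries above show the kubelet's plugin watcher picking up /var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock and csi_plugin.go validating and registering the driver, after which the pending MountVolume operations for pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 succeed in the entries that follow. A minimal sketch of how one might confirm such a registration from the API side, assuming the official Python kubernetes client and a reachable kubeconfig (nothing below is taken from this log):

from kubernetes import client, config

# Load credentials from the local kubeconfig; inside a pod, load_incluster_config() would be used instead.
config.load_kube_config()
storage = client.StorageV1Api()

# CSIDriver objects: the drivers the cluster knows about.
for d in storage.list_csi_driver().items:
    print("CSIDriver:", d.metadata.name)

# CSINode objects: which drivers have actually registered on each node,
# i.e. what the kubelet's "registered CSI drivers" list reflects here.
for n in storage.list_csi_node().items:
    for drv in (n.spec.drivers or []):
        print(f"node {n.metadata.name}: driver {drv.name} registered with node_id {drv.node_id}")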
Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.115637 5119 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.153253 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-qh5n2\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.171212 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.211274 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.230651 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.329133 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4mff9" event={"ID":"965b8822-0f09-43d5-a864-775966130c7d","Type":"ContainerStarted","Data":"76d9e83a4a383ef61621ebd87300eb188d65372f6c319c29280ffba9c7d2c79f"} Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.329193 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4mff9" event={"ID":"965b8822-0f09-43d5-a864-775966130c7d","Type":"ContainerStarted","Data":"205dfda29fca4faf9d7f989696f7eb31cab78499c8d3d413eea5c61f12460763"} Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.338526 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"1bd9bfc9-506a-44c3-9f9c-728cf75bf99d","Type":"ContainerDied","Data":"12c376ec874c22b1ae2798dca3a1203eac742ba65a28d3ac346c240509344f78"} Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.338600 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12c376ec874c22b1ae2798dca3a1203eac742ba65a28d3ac346c240509344f78" Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.338708 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.343712 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"6c050f1b-62ba-4f8f-9eab-696aaf960f00","Type":"ContainerStarted","Data":"aec2aeb96da008948431455094b4706c2c099846e7b1a8d68b51c82708dba092"} Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.534466 5119 patch_prober.go:28] interesting pod/downloads-747b44746d-8mw6h container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.535045 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-8mw6h" podUID="8492563b-3d36-4b7a-a88b-ffd27fd60eb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.589576 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-qh5n2"] Feb 20 00:12:13 crc kubenswrapper[5119]: W0220 00:12:13.623349 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ac5e92f_06f1_4557_8ab1_0a48d313b01c.slice/crio-3ccf782f2d3341ec45ad1358bec62903bef370ee33a60a08b53fc1d9bddeffbb WatchSource:0}: Error finding container 3ccf782f2d3341ec45ad1358bec62903bef370ee33a60a08b53fc1d9bddeffbb: Status 404 returned error can't find the container with id 3ccf782f2d3341ec45ad1358bec62903bef370ee33a60a08b53fc1d9bddeffbb Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.624564 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.624602 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.624658 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.624705 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " 
pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.647162 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.647193 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.647888 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.647952 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.831186 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00a91a87-0ad1-4805-a686-42ea9dfa6bb9-metrics-certs\") pod \"network-metrics-daemon-vnzx8\" (UID: \"00a91a87-0ad1-4805-a686-42ea9dfa6bb9\") " pod="openshift-multus/network-metrics-daemon-vnzx8" Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.840735 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00a91a87-0ad1-4805-a686-42ea9dfa6bb9-metrics-certs\") pod \"network-metrics-daemon-vnzx8\" (UID: \"00a91a87-0ad1-4805-a686-42ea9dfa6bb9\") " pod="openshift-multus/network-metrics-daemon-vnzx8" Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.876817 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.891617 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vnzx8" Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.923759 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:12:13 crc kubenswrapper[5119]: I0220 00:12:13.948411 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:12:14 crc kubenswrapper[5119]: I0220 00:12:14.193719 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vnzx8"] Feb 20 00:12:14 crc kubenswrapper[5119]: I0220 00:12:14.355460 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"e47fe8871e84237aba109f57c25f4e2e81adb5f231566d952fbec21176fc78a6"} Feb 20 00:12:14 crc kubenswrapper[5119]: I0220 00:12:14.358504 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4mff9" event={"ID":"965b8822-0f09-43d5-a864-775966130c7d","Type":"ContainerStarted","Data":"8cdf832a90158b1fcabc90dc2d0a5d12c35af41769a8e0aab3e8ae9a70193954"} Feb 20 00:12:14 crc kubenswrapper[5119]: I0220 00:12:14.359872 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vnzx8" event={"ID":"00a91a87-0ad1-4805-a686-42ea9dfa6bb9","Type":"ContainerStarted","Data":"a7574b01d6e5e65e3ff9a4c410df5f6c1998f621e63addd11da27b731495841b"} Feb 20 00:12:14 crc kubenswrapper[5119]: I0220 00:12:14.361313 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" event={"ID":"6ac5e92f-06f1-4557-8ab1-0a48d313b01c","Type":"ContainerStarted","Data":"3ccf782f2d3341ec45ad1358bec62903bef370ee33a60a08b53fc1d9bddeffbb"} Feb 20 00:12:14 crc kubenswrapper[5119]: I0220 00:12:14.364836 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"6c050f1b-62ba-4f8f-9eab-696aaf960f00","Type":"ContainerStarted","Data":"2135a00d0c0f74fa5587a0e330a6e5939299494fb663461f6d188fa919f52f92"} Feb 20 00:12:14 crc kubenswrapper[5119]: W0220 00:12:14.501297 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf863fff9_286a_45fa_b8f0_8a86994b8440.slice/crio-82d24e5a13e4473783466efab1053fa0fc21518d4d28bf1771bd5a74c9863033 WatchSource:0}: Error finding container 82d24e5a13e4473783466efab1053fa0fc21518d4d28bf1771bd5a74c9863033: Status 404 returned error can't find the container with id 82d24e5a13e4473783466efab1053fa0fc21518d4d28bf1771bd5a74c9863033 Feb 20 00:12:14 crc kubenswrapper[5119]: I0220 00:12:14.873025 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Feb 20 00:12:15 crc kubenswrapper[5119]: I0220 00:12:15.239537 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-8dmqk" Feb 20 00:12:15 crc kubenswrapper[5119]: I0220 00:12:15.372731 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"b8e6133d8328ccf7dfe36217a6b8a12006351ca25248cc0748eb795ce10277fa"} Feb 20 00:12:15 crc kubenswrapper[5119]: I0220 00:12:15.375348 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" event={"ID":"6ac5e92f-06f1-4557-8ab1-0a48d313b01c","Type":"ContainerStarted","Data":"7d7b53517a8b5b89b510635274583c4d394183c0fc9dbab20add28852f213de5"} Feb 20 
00:12:15 crc kubenswrapper[5119]: I0220 00:12:15.376810 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:15 crc kubenswrapper[5119]: I0220 00:12:15.378559 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"82d24e5a13e4473783466efab1053fa0fc21518d4d28bf1771bd5a74c9863033"} Feb 20 00:12:15 crc kubenswrapper[5119]: I0220 00:12:15.397587 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" podStartSLOduration=96.397564525 podStartE2EDuration="1m36.397564525s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:15.394236416 +0000 UTC m=+117.373200708" watchObservedRunningTime="2026-02-20 00:12:15.397564525 +0000 UTC m=+117.376528827" Feb 20 00:12:15 crc kubenswrapper[5119]: I0220 00:12:15.471876 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-4mff9" podStartSLOduration=19.471856912 podStartE2EDuration="19.471856912s" podCreationTimestamp="2026-02-20 00:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:15.433728437 +0000 UTC m=+117.412692749" watchObservedRunningTime="2026-02-20 00:12:15.471856912 +0000 UTC m=+117.450821204" Feb 20 00:12:15 crc kubenswrapper[5119]: I0220 00:12:15.472267 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=3.472262512 podStartE2EDuration="3.472262512s" podCreationTimestamp="2026-02-20 00:12:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:15.469257642 +0000 UTC m=+117.448221964" watchObservedRunningTime="2026-02-20 00:12:15.472262512 +0000 UTC m=+117.451226804" Feb 20 00:12:15 crc kubenswrapper[5119]: E0220 00:12:15.902212 5119 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d4f7da01f3dc86caf7fd824b6f35836203cc49897c458c262289c2d45b168504" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 20 00:12:15 crc kubenswrapper[5119]: E0220 00:12:15.904764 5119 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d4f7da01f3dc86caf7fd824b6f35836203cc49897c458c262289c2d45b168504" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 20 00:12:15 crc kubenswrapper[5119]: E0220 00:12:15.906257 5119 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d4f7da01f3dc86caf7fd824b6f35836203cc49897c458c262289c2d45b168504" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 20 00:12:15 crc kubenswrapper[5119]: E0220 00:12:15.906302 5119 prober.go:104] "Probe errored" err="rpc 
error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-7c2z6" podUID="bcdbee0b-45bb-462c-aac5-ccb96d5b814b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Feb 20 00:12:16 crc kubenswrapper[5119]: I0220 00:12:16.398054 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vnzx8" event={"ID":"00a91a87-0ad1-4805-a686-42ea9dfa6bb9","Type":"ContainerStarted","Data":"983bd80bbebb91723c9eb23029f950d1728579218d7c5fadfc7f4eba142b8055"} Feb 20 00:12:16 crc kubenswrapper[5119]: I0220 00:12:16.403131 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"df0b3d2e6a0a02848f3031350fef19f67d00fa95f073c350ba038c0a50712e37"} Feb 20 00:12:16 crc kubenswrapper[5119]: I0220 00:12:16.406072 5119 generic.go:358] "Generic (PLEG): container finished" podID="6c050f1b-62ba-4f8f-9eab-696aaf960f00" containerID="2135a00d0c0f74fa5587a0e330a6e5939299494fb663461f6d188fa919f52f92" exitCode=0 Feb 20 00:12:16 crc kubenswrapper[5119]: I0220 00:12:16.406133 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"6c050f1b-62ba-4f8f-9eab-696aaf960f00","Type":"ContainerDied","Data":"2135a00d0c0f74fa5587a0e330a6e5939299494fb663461f6d188fa919f52f92"} Feb 20 00:12:16 crc kubenswrapper[5119]: I0220 00:12:16.410250 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"c280c5a7e666d8ce069436d2e16efe329cb8242b1085579e8b857383f06cb063"} Feb 20 00:12:16 crc kubenswrapper[5119]: I0220 00:12:16.416275 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"31b8c1eb32ffa5bd86a04fadb7956e194ae11e205ea76efefd597fded265d51e"} Feb 20 00:12:17 crc kubenswrapper[5119]: I0220 00:12:17.422591 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:12:17 crc kubenswrapper[5119]: I0220 00:12:17.857623 5119 scope.go:117] "RemoveContainer" containerID="f6194530545437982490600b50c8861471742ab842e14ab627a123778c428dcf" Feb 20 00:12:18 crc kubenswrapper[5119]: I0220 00:12:18.163529 5119 patch_prober.go:28] interesting pod/downloads-747b44746d-8mw6h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Feb 20 00:12:18 crc kubenswrapper[5119]: I0220 00:12:18.163626 5119 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-8mw6h" podUID="8492563b-3d36-4b7a-a88b-ffd27fd60eb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.42:8080/\": dial tcp 10.217.0.42:8080: connect: connection refused" Feb 20 00:12:18 crc kubenswrapper[5119]: I0220 00:12:18.500138 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 20 00:12:18 crc kubenswrapper[5119]: I0220 00:12:18.643826 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c050f1b-62ba-4f8f-9eab-696aaf960f00-kube-api-access\") pod \"6c050f1b-62ba-4f8f-9eab-696aaf960f00\" (UID: \"6c050f1b-62ba-4f8f-9eab-696aaf960f00\") " Feb 20 00:12:18 crc kubenswrapper[5119]: I0220 00:12:18.644013 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c050f1b-62ba-4f8f-9eab-696aaf960f00-kubelet-dir\") pod \"6c050f1b-62ba-4f8f-9eab-696aaf960f00\" (UID: \"6c050f1b-62ba-4f8f-9eab-696aaf960f00\") " Feb 20 00:12:18 crc kubenswrapper[5119]: I0220 00:12:18.644500 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c050f1b-62ba-4f8f-9eab-696aaf960f00-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6c050f1b-62ba-4f8f-9eab-696aaf960f00" (UID: "6c050f1b-62ba-4f8f-9eab-696aaf960f00"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:12:18 crc kubenswrapper[5119]: I0220 00:12:18.660754 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c050f1b-62ba-4f8f-9eab-696aaf960f00-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6c050f1b-62ba-4f8f-9eab-696aaf960f00" (UID: "6c050f1b-62ba-4f8f-9eab-696aaf960f00"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:12:18 crc kubenswrapper[5119]: I0220 00:12:18.745677 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c050f1b-62ba-4f8f-9eab-696aaf960f00-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:18 crc kubenswrapper[5119]: I0220 00:12:18.746259 5119 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c050f1b-62ba-4f8f-9eab-696aaf960f00-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:19 crc kubenswrapper[5119]: I0220 00:12:19.440662 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"6c050f1b-62ba-4f8f-9eab-696aaf960f00","Type":"ContainerDied","Data":"aec2aeb96da008948431455094b4706c2c099846e7b1a8d68b51c82708dba092"} Feb 20 00:12:19 crc kubenswrapper[5119]: I0220 00:12:19.441291 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aec2aeb96da008948431455094b4706c2c099846e7b1a8d68b51c82708dba092" Feb 20 00:12:19 crc kubenswrapper[5119]: I0220 00:12:19.441479 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 20 00:12:19 crc kubenswrapper[5119]: E0220 00:12:19.530909 5119 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod6c050f1b_62ba_4f8f_9eab_696aaf960f00.slice\": RecentStats: unable to find data in memory cache]" Feb 20 00:12:19 crc kubenswrapper[5119]: I0220 00:12:19.601667 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-vz6mj" Feb 20 00:12:19 crc kubenswrapper[5119]: I0220 00:12:19.609987 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-vz6mj" Feb 20 00:12:21 crc kubenswrapper[5119]: I0220 00:12:21.206391 5119 ???:1] "http: TLS handshake error from 192.168.126.11:34070: no serving certificate available for the kubelet" Feb 20 00:12:22 crc kubenswrapper[5119]: I0220 00:12:22.478925 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:12:25 crc kubenswrapper[5119]: E0220 00:12:25.904906 5119 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d4f7da01f3dc86caf7fd824b6f35836203cc49897c458c262289c2d45b168504" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 20 00:12:25 crc kubenswrapper[5119]: E0220 00:12:25.908422 5119 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d4f7da01f3dc86caf7fd824b6f35836203cc49897c458c262289c2d45b168504" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 20 00:12:25 crc kubenswrapper[5119]: E0220 00:12:25.910389 5119 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d4f7da01f3dc86caf7fd824b6f35836203cc49897c458c262289c2d45b168504" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 20 00:12:25 crc kubenswrapper[5119]: E0220 00:12:25.910442 5119 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-7c2z6" podUID="bcdbee0b-45bb-462c-aac5-ccb96d5b814b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Feb 20 00:12:27 crc kubenswrapper[5119]: I0220 00:12:27.503722 5119 generic.go:358] "Generic (PLEG): container finished" podID="8c54e6d8-4a07-479d-aec6-81085d348561" containerID="cea31e67b5e73b3d058921a72b1c9447f1bbb2d169a0712dc685f9b389227327" exitCode=0 Feb 20 00:12:27 crc kubenswrapper[5119]: I0220 00:12:27.503857 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4krw4" event={"ID":"8c54e6d8-4a07-479d-aec6-81085d348561","Type":"ContainerDied","Data":"cea31e67b5e73b3d058921a72b1c9447f1bbb2d169a0712dc685f9b389227327"} Feb 20 00:12:27 crc kubenswrapper[5119]: I0220 00:12:27.507483 5119 generic.go:358] "Generic (PLEG): container finished" podID="fdd71310-68b1-4580-8ea5-053669823d3c" containerID="4f4608e6294a1fb93ff27942c3d23da7aac5fd9a4c2286759f53825aa1fde169" exitCode=0 Feb 20 
00:12:27 crc kubenswrapper[5119]: I0220 00:12:27.507589 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr894" event={"ID":"fdd71310-68b1-4580-8ea5-053669823d3c","Type":"ContainerDied","Data":"4f4608e6294a1fb93ff27942c3d23da7aac5fd9a4c2286759f53825aa1fde169"} Feb 20 00:12:27 crc kubenswrapper[5119]: I0220 00:12:27.510059 5119 generic.go:358] "Generic (PLEG): container finished" podID="d1da415e-215f-4b73-b5b5-36a8c7e68fda" containerID="ec8d0370e606af964039c90c9a4cb0455a10ec9f26109c8f9839d196b2da911f" exitCode=0 Feb 20 00:12:27 crc kubenswrapper[5119]: I0220 00:12:27.510159 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2qsm" event={"ID":"d1da415e-215f-4b73-b5b5-36a8c7e68fda","Type":"ContainerDied","Data":"ec8d0370e606af964039c90c9a4cb0455a10ec9f26109c8f9839d196b2da911f"} Feb 20 00:12:27 crc kubenswrapper[5119]: I0220 00:12:27.513780 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vnzx8" event={"ID":"00a91a87-0ad1-4805-a686-42ea9dfa6bb9","Type":"ContainerStarted","Data":"11b93178133bf81f0ac2ef59e588a191a277eb69fbb824464f8c5e5804e2877d"} Feb 20 00:12:27 crc kubenswrapper[5119]: I0220 00:12:27.516827 5119 generic.go:358] "Generic (PLEG): container finished" podID="7a6456d4-f2fc-4c32-82bf-9c58cfa87699" containerID="3a046e23e58d6d71bb493c445b2055b9dfa569bc186cfa52db7466e528bb705b" exitCode=0 Feb 20 00:12:27 crc kubenswrapper[5119]: I0220 00:12:27.516947 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ll66" event={"ID":"7a6456d4-f2fc-4c32-82bf-9c58cfa87699","Type":"ContainerDied","Data":"3a046e23e58d6d71bb493c445b2055b9dfa569bc186cfa52db7466e528bb705b"} Feb 20 00:12:27 crc kubenswrapper[5119]: I0220 00:12:27.520908 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 20 00:12:27 crc kubenswrapper[5119]: I0220 00:12:27.522817 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"0f126153d3b23d4d86fbab667fa433e3301f373417be66da66ddb4613d03693c"} Feb 20 00:12:27 crc kubenswrapper[5119]: I0220 00:12:27.524084 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:12:27 crc kubenswrapper[5119]: I0220 00:12:27.526093 5119 generic.go:358] "Generic (PLEG): container finished" podID="d30fdb9a-8c75-4713-be06-be83ca5aa897" containerID="f13e41a6949929d29e57d3be7e90ff52591d571c8a1299957f7e465751d3cc48" exitCode=0 Feb 20 00:12:27 crc kubenswrapper[5119]: I0220 00:12:27.526208 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hs997" event={"ID":"d30fdb9a-8c75-4713-be06-be83ca5aa897","Type":"ContainerDied","Data":"f13e41a6949929d29e57d3be7e90ff52591d571c8a1299957f7e465751d3cc48"} Feb 20 00:12:27 crc kubenswrapper[5119]: I0220 00:12:27.534753 5119 generic.go:358] "Generic (PLEG): container finished" podID="26fedbc9-e967-46be-9bd2-00822aa128a5" containerID="bd1887dc4f63c79c3ac660eaeb0a5968c299febe679557707ea144ee7f764415" exitCode=0 Feb 20 00:12:27 crc kubenswrapper[5119]: I0220 00:12:27.534885 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zk67r" 
event={"ID":"26fedbc9-e967-46be-9bd2-00822aa128a5","Type":"ContainerDied","Data":"bd1887dc4f63c79c3ac660eaeb0a5968c299febe679557707ea144ee7f764415"} Feb 20 00:12:27 crc kubenswrapper[5119]: I0220 00:12:27.539681 5119 generic.go:358] "Generic (PLEG): container finished" podID="14397656-dc1d-4bf5-a1f2-e9b79fab3e53" containerID="02aed272c44710ef57f5af5081fc3a62692b94c5da44d1fa21503959ce5be9e2" exitCode=0 Feb 20 00:12:27 crc kubenswrapper[5119]: I0220 00:12:27.539816 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rprkl" event={"ID":"14397656-dc1d-4bf5-a1f2-e9b79fab3e53","Type":"ContainerDied","Data":"02aed272c44710ef57f5af5081fc3a62692b94c5da44d1fa21503959ce5be9e2"} Feb 20 00:12:27 crc kubenswrapper[5119]: I0220 00:12:27.541692 5119 generic.go:358] "Generic (PLEG): container finished" podID="651bc74f-974d-4562-b93a-016f18443fb6" containerID="8b84f4586ba4dcf64537d9657ae345110c002009c47b7e0b2a45970d31ee914c" exitCode=0 Feb 20 00:12:27 crc kubenswrapper[5119]: I0220 00:12:27.541800 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rg995" event={"ID":"651bc74f-974d-4562-b93a-016f18443fb6","Type":"ContainerDied","Data":"8b84f4586ba4dcf64537d9657ae345110c002009c47b7e0b2a45970d31ee914c"} Feb 20 00:12:27 crc kubenswrapper[5119]: I0220 00:12:27.643060 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=46.643032187 podStartE2EDuration="46.643032187s" podCreationTimestamp="2026-02-20 00:11:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:27.630044578 +0000 UTC m=+129.609008870" watchObservedRunningTime="2026-02-20 00:12:27.643032187 +0000 UTC m=+129.621996489" Feb 20 00:12:27 crc kubenswrapper[5119]: I0220 00:12:27.700277 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-vnzx8" podStartSLOduration=108.700250915 podStartE2EDuration="1m48.700250915s" podCreationTimestamp="2026-02-20 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:27.696952596 +0000 UTC m=+129.675916918" watchObservedRunningTime="2026-02-20 00:12:27.700250915 +0000 UTC m=+129.679215207" Feb 20 00:12:28 crc kubenswrapper[5119]: I0220 00:12:28.181009 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-8mw6h" Feb 20 00:12:28 crc kubenswrapper[5119]: I0220 00:12:28.554202 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hs997" event={"ID":"d30fdb9a-8c75-4713-be06-be83ca5aa897","Type":"ContainerStarted","Data":"bcfe362ad72212552e41c81e4e5d08a526141afc16d583b30ef03a8c45034107"} Feb 20 00:12:29 crc kubenswrapper[5119]: I0220 00:12:29.098493 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hs997" podStartSLOduration=4.921129455 podStartE2EDuration="23.098469013s" podCreationTimestamp="2026-02-20 00:12:06 +0000 UTC" firstStartedPulling="2026-02-20 00:12:08.123512136 +0000 UTC m=+110.102476428" lastFinishedPulling="2026-02-20 00:12:26.300851694 +0000 UTC m=+128.279815986" observedRunningTime="2026-02-20 00:12:29.096512221 +0000 UTC m=+131.075476523" watchObservedRunningTime="2026-02-20 
00:12:29.098469013 +0000 UTC m=+131.077433305" Feb 20 00:12:29 crc kubenswrapper[5119]: I0220 00:12:29.588137 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rg995" event={"ID":"651bc74f-974d-4562-b93a-016f18443fb6","Type":"ContainerStarted","Data":"7a00e01398406ad76da45c9db405de5d9f2a74810060989fe3ac4fcdc0c70150"} Feb 20 00:12:29 crc kubenswrapper[5119]: I0220 00:12:29.592850 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4krw4" event={"ID":"8c54e6d8-4a07-479d-aec6-81085d348561","Type":"ContainerStarted","Data":"db2692c7589a999469752db81ab30bee5fd4d81617157805b7e103434bf9eb5e"} Feb 20 00:12:29 crc kubenswrapper[5119]: I0220 00:12:29.604875 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr894" event={"ID":"fdd71310-68b1-4580-8ea5-053669823d3c","Type":"ContainerStarted","Data":"20d6331e10cf6ada193a99f4fa3c3a42ba8fa5b393ba840ba3a79faba7aa12d8"} Feb 20 00:12:29 crc kubenswrapper[5119]: E0220 00:12:29.714903 5119 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod6c050f1b_62ba_4f8f_9eab_696aaf960f00.slice\": RecentStats: unable to find data in memory cache]" Feb 20 00:12:29 crc kubenswrapper[5119]: I0220 00:12:29.825951 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gr894" podStartSLOduration=6.699836291 podStartE2EDuration="23.825928905s" podCreationTimestamp="2026-02-20 00:12:06 +0000 UTC" firstStartedPulling="2026-02-20 00:12:09.232009839 +0000 UTC m=+111.210974131" lastFinishedPulling="2026-02-20 00:12:26.358102453 +0000 UTC m=+128.337066745" observedRunningTime="2026-02-20 00:12:29.820210861 +0000 UTC m=+131.799175163" watchObservedRunningTime="2026-02-20 00:12:29.825928905 +0000 UTC m=+131.804893217" Feb 20 00:12:29 crc kubenswrapper[5119]: I0220 00:12:29.905694 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rg995" podStartSLOduration=5.855646038 podStartE2EDuration="22.905667858s" podCreationTimestamp="2026-02-20 00:12:07 +0000 UTC" firstStartedPulling="2026-02-20 00:12:09.236833988 +0000 UTC m=+111.215798280" lastFinishedPulling="2026-02-20 00:12:26.286855798 +0000 UTC m=+128.265820100" observedRunningTime="2026-02-20 00:12:29.905069351 +0000 UTC m=+131.884033713" watchObservedRunningTime="2026-02-20 00:12:29.905667858 +0000 UTC m=+131.884632150" Feb 20 00:12:30 crc kubenswrapper[5119]: I0220 00:12:30.615701 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rprkl" event={"ID":"14397656-dc1d-4bf5-a1f2-e9b79fab3e53","Type":"ContainerStarted","Data":"99dcf975a0cdef0145a8ac6d7b3158265540b340e3fe6bd87ecd699ae42d5cd9"} Feb 20 00:12:30 crc kubenswrapper[5119]: I0220 00:12:30.618605 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2qsm" event={"ID":"d1da415e-215f-4b73-b5b5-36a8c7e68fda","Type":"ContainerStarted","Data":"63a71f6e336d029fb4a7e22d7de5ae6130733f4b21ccee1801400214d06898af"} Feb 20 00:12:30 crc kubenswrapper[5119]: I0220 00:12:30.621792 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ll66" event={"ID":"7a6456d4-f2fc-4c32-82bf-9c58cfa87699","Type":"ContainerStarted","Data":"07aec39cd2bacbdcca785ccd3d0a11155442fcd9db02bf16ce9723428a3a73ce"} Feb 20 
00:12:30 crc kubenswrapper[5119]: I0220 00:12:30.625037 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zk67r" event={"ID":"26fedbc9-e967-46be-9bd2-00822aa128a5","Type":"ContainerStarted","Data":"6ac7f7b9c00dc82eab3e9c77207268a7cc473025f63fc8099e0d1198ee74a390"} Feb 20 00:12:30 crc kubenswrapper[5119]: I0220 00:12:30.644082 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rprkl" podStartSLOduration=6.486876445 podStartE2EDuration="25.644047428s" podCreationTimestamp="2026-02-20 00:12:05 +0000 UTC" firstStartedPulling="2026-02-20 00:12:07.103843571 +0000 UTC m=+109.082807863" lastFinishedPulling="2026-02-20 00:12:26.261014554 +0000 UTC m=+128.239978846" observedRunningTime="2026-02-20 00:12:30.636190164 +0000 UTC m=+132.615154536" watchObservedRunningTime="2026-02-20 00:12:30.644047428 +0000 UTC m=+132.623011760" Feb 20 00:12:30 crc kubenswrapper[5119]: I0220 00:12:30.662434 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z2qsm" podStartSLOduration=7.23880765 podStartE2EDuration="27.66241206s" podCreationTimestamp="2026-02-20 00:12:03 +0000 UTC" firstStartedPulling="2026-02-20 00:12:05.858626764 +0000 UTC m=+107.837591056" lastFinishedPulling="2026-02-20 00:12:26.282231174 +0000 UTC m=+128.261195466" observedRunningTime="2026-02-20 00:12:30.656005105 +0000 UTC m=+132.634969387" watchObservedRunningTime="2026-02-20 00:12:30.66241206 +0000 UTC m=+132.641376362" Feb 20 00:12:30 crc kubenswrapper[5119]: I0220 00:12:30.682122 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8ll66" podStartSLOduration=7.267343317 podStartE2EDuration="27.682096558s" podCreationTimestamp="2026-02-20 00:12:03 +0000 UTC" firstStartedPulling="2026-02-20 00:12:05.873309649 +0000 UTC m=+107.852273941" lastFinishedPulling="2026-02-20 00:12:26.28806287 +0000 UTC m=+128.267027182" observedRunningTime="2026-02-20 00:12:30.678307565 +0000 UTC m=+132.657271867" watchObservedRunningTime="2026-02-20 00:12:30.682096558 +0000 UTC m=+132.661060850" Feb 20 00:12:30 crc kubenswrapper[5119]: I0220 00:12:30.707370 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zk67r" podStartSLOduration=7.506907523 podStartE2EDuration="26.707351618s" podCreationTimestamp="2026-02-20 00:12:04 +0000 UTC" firstStartedPulling="2026-02-20 00:12:07.086436853 +0000 UTC m=+109.065401145" lastFinishedPulling="2026-02-20 00:12:26.286880948 +0000 UTC m=+128.265845240" observedRunningTime="2026-02-20 00:12:30.706424333 +0000 UTC m=+132.685388635" watchObservedRunningTime="2026-02-20 00:12:30.707351618 +0000 UTC m=+132.686315910" Feb 20 00:12:30 crc kubenswrapper[5119]: I0220 00:12:30.738086 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4krw4" podStartSLOduration=7.388544246 podStartE2EDuration="26.738058118s" podCreationTimestamp="2026-02-20 00:12:04 +0000 UTC" firstStartedPulling="2026-02-20 00:12:06.969163941 +0000 UTC m=+108.948128233" lastFinishedPulling="2026-02-20 00:12:26.318677803 +0000 UTC m=+128.297642105" observedRunningTime="2026-02-20 00:12:30.734728417 +0000 UTC m=+132.713692729" watchObservedRunningTime="2026-02-20 00:12:30.738058118 +0000 UTC m=+132.717022420" Feb 20 00:12:34 crc kubenswrapper[5119]: I0220 00:12:34.141487 5119 
kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8ll66" Feb 20 00:12:34 crc kubenswrapper[5119]: I0220 00:12:34.142135 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-8ll66" Feb 20 00:12:34 crc kubenswrapper[5119]: I0220 00:12:34.295443 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8ll66" Feb 20 00:12:34 crc kubenswrapper[5119]: I0220 00:12:34.415387 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-z2qsm" Feb 20 00:12:34 crc kubenswrapper[5119]: I0220 00:12:34.415489 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z2qsm" Feb 20 00:12:34 crc kubenswrapper[5119]: I0220 00:12:34.466415 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z2qsm" Feb 20 00:12:34 crc kubenswrapper[5119]: I0220 00:12:34.653966 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zk67r" Feb 20 00:12:34 crc kubenswrapper[5119]: I0220 00:12:34.654314 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-zk67r" Feb 20 00:12:34 crc kubenswrapper[5119]: I0220 00:12:34.692372 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8ll66" Feb 20 00:12:34 crc kubenswrapper[5119]: I0220 00:12:34.702288 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zk67r" Feb 20 00:12:34 crc kubenswrapper[5119]: I0220 00:12:34.731387 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4krw4" Feb 20 00:12:34 crc kubenswrapper[5119]: I0220 00:12:34.731743 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-4krw4" Feb 20 00:12:34 crc kubenswrapper[5119]: I0220 00:12:34.768748 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4krw4" Feb 20 00:12:34 crc kubenswrapper[5119]: I0220 00:12:34.880900 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5vrqr" Feb 20 00:12:35 crc kubenswrapper[5119]: I0220 00:12:35.694823 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4krw4" Feb 20 00:12:35 crc kubenswrapper[5119]: I0220 00:12:35.696087 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zk67r" Feb 20 00:12:35 crc kubenswrapper[5119]: E0220 00:12:35.900034 5119 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d4f7da01f3dc86caf7fd824b6f35836203cc49897c458c262289c2d45b168504" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 20 00:12:35 crc kubenswrapper[5119]: E0220 00:12:35.905590 5119 log.go:32] "ExecSync cmd from runtime service failed" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d4f7da01f3dc86caf7fd824b6f35836203cc49897c458c262289c2d45b168504" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 20 00:12:35 crc kubenswrapper[5119]: E0220 00:12:35.907015 5119 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d4f7da01f3dc86caf7fd824b6f35836203cc49897c458c262289c2d45b168504" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 20 00:12:35 crc kubenswrapper[5119]: E0220 00:12:35.907048 5119 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-7c2z6" podUID="bcdbee0b-45bb-462c-aac5-ccb96d5b814b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Feb 20 00:12:36 crc kubenswrapper[5119]: I0220 00:12:36.346068 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-rprkl" Feb 20 00:12:36 crc kubenswrapper[5119]: I0220 00:12:36.347208 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rprkl" Feb 20 00:12:36 crc kubenswrapper[5119]: I0220 00:12:36.390761 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rprkl" Feb 20 00:12:36 crc kubenswrapper[5119]: I0220 00:12:36.668781 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hs997" Feb 20 00:12:36 crc kubenswrapper[5119]: I0220 00:12:36.668851 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-hs997" Feb 20 00:12:36 crc kubenswrapper[5119]: I0220 00:12:36.713062 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hs997" Feb 20 00:12:36 crc kubenswrapper[5119]: I0220 00:12:36.725435 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rprkl" Feb 20 00:12:37 crc kubenswrapper[5119]: I0220 00:12:37.312683 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-gr894" Feb 20 00:12:37 crc kubenswrapper[5119]: I0220 00:12:37.312751 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gr894" Feb 20 00:12:37 crc kubenswrapper[5119]: I0220 00:12:37.358132 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gr894" Feb 20 00:12:37 crc kubenswrapper[5119]: I0220 00:12:37.427723 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:12:37 crc kubenswrapper[5119]: I0220 00:12:37.723493 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gr894" Feb 20 00:12:37 crc kubenswrapper[5119]: I0220 00:12:37.725205 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-hs997" Feb 20 00:12:37 crc kubenswrapper[5119]: I0220 00:12:37.745675 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4krw4"] Feb 20 00:12:37 crc kubenswrapper[5119]: I0220 00:12:37.752748 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-rg995" Feb 20 00:12:37 crc kubenswrapper[5119]: I0220 00:12:37.752821 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rg995" Feb 20 00:12:37 crc kubenswrapper[5119]: I0220 00:12:37.803347 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rg995" Feb 20 00:12:37 crc kubenswrapper[5119]: I0220 00:12:37.944376 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zk67r"] Feb 20 00:12:38 crc kubenswrapper[5119]: I0220 00:12:38.689902 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-7c2z6_bcdbee0b-45bb-462c-aac5-ccb96d5b814b/kube-multus-additional-cni-plugins/0.log" Feb 20 00:12:38 crc kubenswrapper[5119]: I0220 00:12:38.690280 5119 generic.go:358] "Generic (PLEG): container finished" podID="bcdbee0b-45bb-462c-aac5-ccb96d5b814b" containerID="d4f7da01f3dc86caf7fd824b6f35836203cc49897c458c262289c2d45b168504" exitCode=137 Feb 20 00:12:38 crc kubenswrapper[5119]: I0220 00:12:38.690748 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zk67r" podUID="26fedbc9-e967-46be-9bd2-00822aa128a5" containerName="registry-server" containerID="cri-o://6ac7f7b9c00dc82eab3e9c77207268a7cc473025f63fc8099e0d1198ee74a390" gracePeriod=2 Feb 20 00:12:38 crc kubenswrapper[5119]: I0220 00:12:38.690845 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-7c2z6" event={"ID":"bcdbee0b-45bb-462c-aac5-ccb96d5b814b","Type":"ContainerDied","Data":"d4f7da01f3dc86caf7fd824b6f35836203cc49897c458c262289c2d45b168504"} Feb 20 00:12:38 crc kubenswrapper[5119]: I0220 00:12:38.692507 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4krw4" podUID="8c54e6d8-4a07-479d-aec6-81085d348561" containerName="registry-server" containerID="cri-o://db2692c7589a999469752db81ab30bee5fd4d81617157805b7e103434bf9eb5e" gracePeriod=2 Feb 20 00:12:38 crc kubenswrapper[5119]: I0220 00:12:38.735074 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rg995" Feb 20 00:12:39 crc kubenswrapper[5119]: I0220 00:12:39.084077 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:12:39 crc kubenswrapper[5119]: I0220 00:12:39.697892 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-7c2z6_bcdbee0b-45bb-462c-aac5-ccb96d5b814b/kube-multus-additional-cni-plugins/0.log" Feb 20 00:12:39 crc kubenswrapper[5119]: I0220 00:12:39.698044 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-7c2z6" event={"ID":"bcdbee0b-45bb-462c-aac5-ccb96d5b814b","Type":"ContainerDied","Data":"da295c3421409e96538fb7a400abcf52373238955c25be6cedda39c129afb1ab"} Feb 20 00:12:39 crc kubenswrapper[5119]: I0220 00:12:39.698107 
5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da295c3421409e96538fb7a400abcf52373238955c25be6cedda39c129afb1ab" Feb 20 00:12:39 crc kubenswrapper[5119]: I0220 00:12:39.728823 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-7c2z6_bcdbee0b-45bb-462c-aac5-ccb96d5b814b/kube-multus-additional-cni-plugins/0.log" Feb 20 00:12:39 crc kubenswrapper[5119]: I0220 00:12:39.729236 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-7c2z6" Feb 20 00:12:39 crc kubenswrapper[5119]: I0220 00:12:39.835256 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bcdbee0b-45bb-462c-aac5-ccb96d5b814b-tuning-conf-dir\") pod \"bcdbee0b-45bb-462c-aac5-ccb96d5b814b\" (UID: \"bcdbee0b-45bb-462c-aac5-ccb96d5b814b\") " Feb 20 00:12:39 crc kubenswrapper[5119]: I0220 00:12:39.835446 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/bcdbee0b-45bb-462c-aac5-ccb96d5b814b-cni-sysctl-allowlist\") pod \"bcdbee0b-45bb-462c-aac5-ccb96d5b814b\" (UID: \"bcdbee0b-45bb-462c-aac5-ccb96d5b814b\") " Feb 20 00:12:39 crc kubenswrapper[5119]: I0220 00:12:39.835493 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/bcdbee0b-45bb-462c-aac5-ccb96d5b814b-ready\") pod \"bcdbee0b-45bb-462c-aac5-ccb96d5b814b\" (UID: \"bcdbee0b-45bb-462c-aac5-ccb96d5b814b\") " Feb 20 00:12:39 crc kubenswrapper[5119]: I0220 00:12:39.835531 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49v5x\" (UniqueName: \"kubernetes.io/projected/bcdbee0b-45bb-462c-aac5-ccb96d5b814b-kube-api-access-49v5x\") pod \"bcdbee0b-45bb-462c-aac5-ccb96d5b814b\" (UID: \"bcdbee0b-45bb-462c-aac5-ccb96d5b814b\") " Feb 20 00:12:39 crc kubenswrapper[5119]: I0220 00:12:39.836371 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcdbee0b-45bb-462c-aac5-ccb96d5b814b-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "bcdbee0b-45bb-462c-aac5-ccb96d5b814b" (UID: "bcdbee0b-45bb-462c-aac5-ccb96d5b814b"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:12:39 crc kubenswrapper[5119]: I0220 00:12:39.837203 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcdbee0b-45bb-462c-aac5-ccb96d5b814b-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "bcdbee0b-45bb-462c-aac5-ccb96d5b814b" (UID: "bcdbee0b-45bb-462c-aac5-ccb96d5b814b"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:12:39 crc kubenswrapper[5119]: I0220 00:12:39.837491 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcdbee0b-45bb-462c-aac5-ccb96d5b814b-ready" (OuterVolumeSpecName: "ready") pod "bcdbee0b-45bb-462c-aac5-ccb96d5b814b" (UID: "bcdbee0b-45bb-462c-aac5-ccb96d5b814b"). InnerVolumeSpecName "ready". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:12:39 crc kubenswrapper[5119]: I0220 00:12:39.851256 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcdbee0b-45bb-462c-aac5-ccb96d5b814b-kube-api-access-49v5x" (OuterVolumeSpecName: "kube-api-access-49v5x") pod "bcdbee0b-45bb-462c-aac5-ccb96d5b814b" (UID: "bcdbee0b-45bb-462c-aac5-ccb96d5b814b"). InnerVolumeSpecName "kube-api-access-49v5x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:12:39 crc kubenswrapper[5119]: E0220 00:12:39.869615 5119 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod6c050f1b_62ba_4f8f_9eab_696aaf960f00.slice\": RecentStats: unable to find data in memory cache]" Feb 20 00:12:39 crc kubenswrapper[5119]: I0220 00:12:39.937730 5119 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bcdbee0b-45bb-462c-aac5-ccb96d5b814b-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:39 crc kubenswrapper[5119]: I0220 00:12:39.937775 5119 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/bcdbee0b-45bb-462c-aac5-ccb96d5b814b-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:39 crc kubenswrapper[5119]: I0220 00:12:39.937826 5119 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/bcdbee0b-45bb-462c-aac5-ccb96d5b814b-ready\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:39 crc kubenswrapper[5119]: I0220 00:12:39.937836 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-49v5x\" (UniqueName: \"kubernetes.io/projected/bcdbee0b-45bb-462c-aac5-ccb96d5b814b-kube-api-access-49v5x\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.145554 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hs997"] Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.146009 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hs997" podUID="d30fdb9a-8c75-4713-be06-be83ca5aa897" containerName="registry-server" containerID="cri-o://bcfe362ad72212552e41c81e4e5d08a526141afc16d583b30ef03a8c45034107" gracePeriod=2 Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.343889 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rg995"] Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.705620 5119 generic.go:358] "Generic (PLEG): container finished" podID="d30fdb9a-8c75-4713-be06-be83ca5aa897" containerID="bcfe362ad72212552e41c81e4e5d08a526141afc16d583b30ef03a8c45034107" exitCode=0 Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.705742 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hs997" event={"ID":"d30fdb9a-8c75-4713-be06-be83ca5aa897","Type":"ContainerDied","Data":"bcfe362ad72212552e41c81e4e5d08a526141afc16d583b30ef03a8c45034107"} Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.708655 5119 generic.go:358] "Generic (PLEG): container finished" podID="26fedbc9-e967-46be-9bd2-00822aa128a5" containerID="6ac7f7b9c00dc82eab3e9c77207268a7cc473025f63fc8099e0d1198ee74a390" exitCode=0 Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.708841 5119 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-zk67r" event={"ID":"26fedbc9-e967-46be-9bd2-00822aa128a5","Type":"ContainerDied","Data":"6ac7f7b9c00dc82eab3e9c77207268a7cc473025f63fc8099e0d1198ee74a390"} Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.708900 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zk67r" event={"ID":"26fedbc9-e967-46be-9bd2-00822aa128a5","Type":"ContainerDied","Data":"1cd8b24504d4cb2c60132bef0bb1778e86d1bb4d52bf7528c5d28ad6533c10be"} Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.708939 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cd8b24504d4cb2c60132bef0bb1778e86d1bb4d52bf7528c5d28ad6533c10be" Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.711395 5119 generic.go:358] "Generic (PLEG): container finished" podID="8c54e6d8-4a07-479d-aec6-81085d348561" containerID="db2692c7589a999469752db81ab30bee5fd4d81617157805b7e103434bf9eb5e" exitCode=0 Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.711692 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-7c2z6" Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.714036 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4krw4" event={"ID":"8c54e6d8-4a07-479d-aec6-81085d348561","Type":"ContainerDied","Data":"db2692c7589a999469752db81ab30bee5fd4d81617157805b7e103434bf9eb5e"} Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.714420 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rg995" podUID="651bc74f-974d-4562-b93a-016f18443fb6" containerName="registry-server" containerID="cri-o://7a00e01398406ad76da45c9db405de5d9f2a74810060989fe3ac4fcdc0c70150" gracePeriod=2 Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.716288 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zk67r" Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.790717 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4krw4" Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.792348 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-7c2z6"] Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.793300 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26fedbc9-e967-46be-9bd2-00822aa128a5-catalog-content\") pod \"26fedbc9-e967-46be-9bd2-00822aa128a5\" (UID: \"26fedbc9-e967-46be-9bd2-00822aa128a5\") " Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.793470 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7r6z\" (UniqueName: \"kubernetes.io/projected/26fedbc9-e967-46be-9bd2-00822aa128a5-kube-api-access-n7r6z\") pod \"26fedbc9-e967-46be-9bd2-00822aa128a5\" (UID: \"26fedbc9-e967-46be-9bd2-00822aa128a5\") " Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.793609 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26fedbc9-e967-46be-9bd2-00822aa128a5-utilities\") pod \"26fedbc9-e967-46be-9bd2-00822aa128a5\" (UID: \"26fedbc9-e967-46be-9bd2-00822aa128a5\") " Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.793893 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-7c2z6"] Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.796010 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26fedbc9-e967-46be-9bd2-00822aa128a5-utilities" (OuterVolumeSpecName: "utilities") pod "26fedbc9-e967-46be-9bd2-00822aa128a5" (UID: "26fedbc9-e967-46be-9bd2-00822aa128a5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.818177 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26fedbc9-e967-46be-9bd2-00822aa128a5-kube-api-access-n7r6z" (OuterVolumeSpecName: "kube-api-access-n7r6z") pod "26fedbc9-e967-46be-9bd2-00822aa128a5" (UID: "26fedbc9-e967-46be-9bd2-00822aa128a5"). InnerVolumeSpecName "kube-api-access-n7r6z". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.839691 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26fedbc9-e967-46be-9bd2-00822aa128a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26fedbc9-e967-46be-9bd2-00822aa128a5" (UID: "26fedbc9-e967-46be-9bd2-00822aa128a5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.864175 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcdbee0b-45bb-462c-aac5-ccb96d5b814b" path="/var/lib/kubelet/pods/bcdbee0b-45bb-462c-aac5-ccb96d5b814b/volumes" Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.895854 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c54e6d8-4a07-479d-aec6-81085d348561-catalog-content\") pod \"8c54e6d8-4a07-479d-aec6-81085d348561\" (UID: \"8c54e6d8-4a07-479d-aec6-81085d348561\") " Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.895907 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rhj6\" (UniqueName: \"kubernetes.io/projected/8c54e6d8-4a07-479d-aec6-81085d348561-kube-api-access-6rhj6\") pod \"8c54e6d8-4a07-479d-aec6-81085d348561\" (UID: \"8c54e6d8-4a07-479d-aec6-81085d348561\") " Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.895971 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c54e6d8-4a07-479d-aec6-81085d348561-utilities\") pod \"8c54e6d8-4a07-479d-aec6-81085d348561\" (UID: \"8c54e6d8-4a07-479d-aec6-81085d348561\") " Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.896248 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n7r6z\" (UniqueName: \"kubernetes.io/projected/26fedbc9-e967-46be-9bd2-00822aa128a5-kube-api-access-n7r6z\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.896265 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26fedbc9-e967-46be-9bd2-00822aa128a5-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.896280 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26fedbc9-e967-46be-9bd2-00822aa128a5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.897339 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c54e6d8-4a07-479d-aec6-81085d348561-utilities" (OuterVolumeSpecName: "utilities") pod "8c54e6d8-4a07-479d-aec6-81085d348561" (UID: "8c54e6d8-4a07-479d-aec6-81085d348561"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.909059 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c54e6d8-4a07-479d-aec6-81085d348561-kube-api-access-6rhj6" (OuterVolumeSpecName: "kube-api-access-6rhj6") pod "8c54e6d8-4a07-479d-aec6-81085d348561" (UID: "8c54e6d8-4a07-479d-aec6-81085d348561"). InnerVolumeSpecName "kube-api-access-6rhj6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.951476 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c54e6d8-4a07-479d-aec6-81085d348561-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c54e6d8-4a07-479d-aec6-81085d348561" (UID: "8c54e6d8-4a07-479d-aec6-81085d348561"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.997783 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c54e6d8-4a07-479d-aec6-81085d348561-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.997832 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rhj6\" (UniqueName: \"kubernetes.io/projected/8c54e6d8-4a07-479d-aec6-81085d348561-kube-api-access-6rhj6\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:40 crc kubenswrapper[5119]: I0220 00:12:40.997852 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c54e6d8-4a07-479d-aec6-81085d348561-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:41 crc kubenswrapper[5119]: I0220 00:12:41.561657 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hs997" Feb 20 00:12:41 crc kubenswrapper[5119]: I0220 00:12:41.605924 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zqsf\" (UniqueName: \"kubernetes.io/projected/d30fdb9a-8c75-4713-be06-be83ca5aa897-kube-api-access-5zqsf\") pod \"d30fdb9a-8c75-4713-be06-be83ca5aa897\" (UID: \"d30fdb9a-8c75-4713-be06-be83ca5aa897\") " Feb 20 00:12:41 crc kubenswrapper[5119]: I0220 00:12:41.606012 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d30fdb9a-8c75-4713-be06-be83ca5aa897-catalog-content\") pod \"d30fdb9a-8c75-4713-be06-be83ca5aa897\" (UID: \"d30fdb9a-8c75-4713-be06-be83ca5aa897\") " Feb 20 00:12:41 crc kubenswrapper[5119]: I0220 00:12:41.606115 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d30fdb9a-8c75-4713-be06-be83ca5aa897-utilities\") pod \"d30fdb9a-8c75-4713-be06-be83ca5aa897\" (UID: \"d30fdb9a-8c75-4713-be06-be83ca5aa897\") " Feb 20 00:12:41 crc kubenswrapper[5119]: I0220 00:12:41.607646 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d30fdb9a-8c75-4713-be06-be83ca5aa897-utilities" (OuterVolumeSpecName: "utilities") pod "d30fdb9a-8c75-4713-be06-be83ca5aa897" (UID: "d30fdb9a-8c75-4713-be06-be83ca5aa897"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:12:41 crc kubenswrapper[5119]: I0220 00:12:41.611824 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d30fdb9a-8c75-4713-be06-be83ca5aa897-kube-api-access-5zqsf" (OuterVolumeSpecName: "kube-api-access-5zqsf") pod "d30fdb9a-8c75-4713-be06-be83ca5aa897" (UID: "d30fdb9a-8c75-4713-be06-be83ca5aa897"). InnerVolumeSpecName "kube-api-access-5zqsf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:12:41 crc kubenswrapper[5119]: I0220 00:12:41.627944 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d30fdb9a-8c75-4713-be06-be83ca5aa897-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d30fdb9a-8c75-4713-be06-be83ca5aa897" (UID: "d30fdb9a-8c75-4713-be06-be83ca5aa897"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:12:41 crc kubenswrapper[5119]: I0220 00:12:41.708788 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5zqsf\" (UniqueName: \"kubernetes.io/projected/d30fdb9a-8c75-4713-be06-be83ca5aa897-kube-api-access-5zqsf\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:41 crc kubenswrapper[5119]: I0220 00:12:41.708833 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d30fdb9a-8c75-4713-be06-be83ca5aa897-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:41 crc kubenswrapper[5119]: I0220 00:12:41.708850 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d30fdb9a-8c75-4713-be06-be83ca5aa897-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:41 crc kubenswrapper[5119]: I0220 00:12:41.715109 5119 ???:1] "http: TLS handshake error from 192.168.126.11:44770: no serving certificate available for the kubelet" Feb 20 00:12:41 crc kubenswrapper[5119]: I0220 00:12:41.724573 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hs997" event={"ID":"d30fdb9a-8c75-4713-be06-be83ca5aa897","Type":"ContainerDied","Data":"852f2fad1f5f946df2fddc9be03c76cfd4a7f550b8f8abd0d277cedbe3ec29ac"} Feb 20 00:12:41 crc kubenswrapper[5119]: I0220 00:12:41.724688 5119 scope.go:117] "RemoveContainer" containerID="bcfe362ad72212552e41c81e4e5d08a526141afc16d583b30ef03a8c45034107" Feb 20 00:12:41 crc kubenswrapper[5119]: I0220 00:12:41.724612 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hs997" Feb 20 00:12:41 crc kubenswrapper[5119]: I0220 00:12:41.732969 5119 generic.go:358] "Generic (PLEG): container finished" podID="651bc74f-974d-4562-b93a-016f18443fb6" containerID="7a00e01398406ad76da45c9db405de5d9f2a74810060989fe3ac4fcdc0c70150" exitCode=0 Feb 20 00:12:41 crc kubenswrapper[5119]: I0220 00:12:41.733069 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rg995" event={"ID":"651bc74f-974d-4562-b93a-016f18443fb6","Type":"ContainerDied","Data":"7a00e01398406ad76da45c9db405de5d9f2a74810060989fe3ac4fcdc0c70150"} Feb 20 00:12:41 crc kubenswrapper[5119]: I0220 00:12:41.735169 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4krw4" event={"ID":"8c54e6d8-4a07-479d-aec6-81085d348561","Type":"ContainerDied","Data":"0d4cb6fda7910dec81e76cb4669e2e90e7dc5b63563f9a48dfa1f50a7f4518a4"} Feb 20 00:12:41 crc kubenswrapper[5119]: I0220 00:12:41.735203 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4krw4" Feb 20 00:12:41 crc kubenswrapper[5119]: I0220 00:12:41.735631 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zk67r" Feb 20 00:12:41 crc kubenswrapper[5119]: I0220 00:12:41.775096 5119 scope.go:117] "RemoveContainer" containerID="f13e41a6949929d29e57d3be7e90ff52591d571c8a1299957f7e465751d3cc48" Feb 20 00:12:41 crc kubenswrapper[5119]: I0220 00:12:41.790571 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zk67r"] Feb 20 00:12:41 crc kubenswrapper[5119]: I0220 00:12:41.792739 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zk67r"] Feb 20 00:12:41 crc kubenswrapper[5119]: I0220 00:12:41.801880 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4krw4"] Feb 20 00:12:41 crc kubenswrapper[5119]: I0220 00:12:41.805813 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4krw4"] Feb 20 00:12:41 crc kubenswrapper[5119]: I0220 00:12:41.822114 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hs997"] Feb 20 00:12:41 crc kubenswrapper[5119]: I0220 00:12:41.824801 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hs997"] Feb 20 00:12:41 crc kubenswrapper[5119]: I0220 00:12:41.830956 5119 scope.go:117] "RemoveContainer" containerID="48f11e597c2ec98d0c0bafbd77745a6282a5efab77ab6f9933598ad724feae07" Feb 20 00:12:41 crc kubenswrapper[5119]: I0220 00:12:41.848120 5119 scope.go:117] "RemoveContainer" containerID="db2692c7589a999469752db81ab30bee5fd4d81617157805b7e103434bf9eb5e" Feb 20 00:12:41 crc kubenswrapper[5119]: I0220 00:12:41.864839 5119 scope.go:117] "RemoveContainer" containerID="cea31e67b5e73b3d058921a72b1c9447f1bbb2d169a0712dc685f9b389227327" Feb 20 00:12:41 crc kubenswrapper[5119]: I0220 00:12:41.880859 5119 scope.go:117] "RemoveContainer" containerID="f17d82722af616c6de0474b99193fe6da41673000c558716352d4270131bf90c" Feb 20 00:12:42 crc kubenswrapper[5119]: I0220 00:12:42.470062 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rg995" Feb 20 00:12:42 crc kubenswrapper[5119]: I0220 00:12:42.521196 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/651bc74f-974d-4562-b93a-016f18443fb6-utilities\") pod \"651bc74f-974d-4562-b93a-016f18443fb6\" (UID: \"651bc74f-974d-4562-b93a-016f18443fb6\") " Feb 20 00:12:42 crc kubenswrapper[5119]: I0220 00:12:42.521263 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qws5\" (UniqueName: \"kubernetes.io/projected/651bc74f-974d-4562-b93a-016f18443fb6-kube-api-access-2qws5\") pod \"651bc74f-974d-4562-b93a-016f18443fb6\" (UID: \"651bc74f-974d-4562-b93a-016f18443fb6\") " Feb 20 00:12:42 crc kubenswrapper[5119]: I0220 00:12:42.521502 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/651bc74f-974d-4562-b93a-016f18443fb6-catalog-content\") pod \"651bc74f-974d-4562-b93a-016f18443fb6\" (UID: \"651bc74f-974d-4562-b93a-016f18443fb6\") " Feb 20 00:12:42 crc kubenswrapper[5119]: I0220 00:12:42.522289 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/651bc74f-974d-4562-b93a-016f18443fb6-utilities" (OuterVolumeSpecName: "utilities") pod "651bc74f-974d-4562-b93a-016f18443fb6" (UID: "651bc74f-974d-4562-b93a-016f18443fb6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:12:42 crc kubenswrapper[5119]: I0220 00:12:42.526951 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/651bc74f-974d-4562-b93a-016f18443fb6-kube-api-access-2qws5" (OuterVolumeSpecName: "kube-api-access-2qws5") pod "651bc74f-974d-4562-b93a-016f18443fb6" (UID: "651bc74f-974d-4562-b93a-016f18443fb6"). InnerVolumeSpecName "kube-api-access-2qws5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:12:42 crc kubenswrapper[5119]: I0220 00:12:42.624122 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/651bc74f-974d-4562-b93a-016f18443fb6-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:42 crc kubenswrapper[5119]: I0220 00:12:42.624166 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2qws5\" (UniqueName: \"kubernetes.io/projected/651bc74f-974d-4562-b93a-016f18443fb6-kube-api-access-2qws5\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:42 crc kubenswrapper[5119]: I0220 00:12:42.643764 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/651bc74f-974d-4562-b93a-016f18443fb6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "651bc74f-974d-4562-b93a-016f18443fb6" (UID: "651bc74f-974d-4562-b93a-016f18443fb6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:12:42 crc kubenswrapper[5119]: I0220 00:12:42.725657 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/651bc74f-974d-4562-b93a-016f18443fb6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:42 crc kubenswrapper[5119]: I0220 00:12:42.746637 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rg995" event={"ID":"651bc74f-974d-4562-b93a-016f18443fb6","Type":"ContainerDied","Data":"6a1904cd8a30066b17d2972c7f8b22d750d1e9f9f054166f3b6ee7b8a4045e84"} Feb 20 00:12:42 crc kubenswrapper[5119]: I0220 00:12:42.746719 5119 scope.go:117] "RemoveContainer" containerID="7a00e01398406ad76da45c9db405de5d9f2a74810060989fe3ac4fcdc0c70150" Feb 20 00:12:42 crc kubenswrapper[5119]: I0220 00:12:42.746724 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rg995" Feb 20 00:12:42 crc kubenswrapper[5119]: I0220 00:12:42.767680 5119 scope.go:117] "RemoveContainer" containerID="8b84f4586ba4dcf64537d9657ae345110c002009c47b7e0b2a45970d31ee914c" Feb 20 00:12:42 crc kubenswrapper[5119]: I0220 00:12:42.780434 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rg995"] Feb 20 00:12:42 crc kubenswrapper[5119]: I0220 00:12:42.786260 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rg995"] Feb 20 00:12:42 crc kubenswrapper[5119]: I0220 00:12:42.808418 5119 scope.go:117] "RemoveContainer" containerID="e9fa4abb943c3b5b04fe0c6d3e09514c1617f35f9d94511341399e3e41e4cb91" Feb 20 00:12:42 crc kubenswrapper[5119]: I0220 00:12:42.865073 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26fedbc9-e967-46be-9bd2-00822aa128a5" path="/var/lib/kubelet/pods/26fedbc9-e967-46be-9bd2-00822aa128a5/volumes" Feb 20 00:12:42 crc kubenswrapper[5119]: I0220 00:12:42.865802 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="651bc74f-974d-4562-b93a-016f18443fb6" path="/var/lib/kubelet/pods/651bc74f-974d-4562-b93a-016f18443fb6/volumes" Feb 20 00:12:42 crc kubenswrapper[5119]: I0220 00:12:42.866431 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c54e6d8-4a07-479d-aec6-81085d348561" path="/var/lib/kubelet/pods/8c54e6d8-4a07-479d-aec6-81085d348561/volumes" Feb 20 00:12:42 crc kubenswrapper[5119]: I0220 00:12:42.867680 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d30fdb9a-8c75-4713-be06-be83ca5aa897" path="/var/lib/kubelet/pods/d30fdb9a-8c75-4713-be06-be83ca5aa897/volumes" Feb 20 00:12:44 crc kubenswrapper[5119]: I0220 00:12:44.710392 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z2qsm" Feb 20 00:12:48 crc kubenswrapper[5119]: I0220 00:12:48.440536 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.828128 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829002 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6c050f1b-62ba-4f8f-9eab-696aaf960f00" containerName="pruner" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829015 
5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c050f1b-62ba-4f8f-9eab-696aaf960f00" containerName="pruner" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829023 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8c54e6d8-4a07-479d-aec6-81085d348561" containerName="extract-content" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829029 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c54e6d8-4a07-479d-aec6-81085d348561" containerName="extract-content" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829037 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d30fdb9a-8c75-4713-be06-be83ca5aa897" containerName="extract-content" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829043 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="d30fdb9a-8c75-4713-be06-be83ca5aa897" containerName="extract-content" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829053 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="26fedbc9-e967-46be-9bd2-00822aa128a5" containerName="extract-content" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829058 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="26fedbc9-e967-46be-9bd2-00822aa128a5" containerName="extract-content" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829069 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bcdbee0b-45bb-462c-aac5-ccb96d5b814b" containerName="kube-multus-additional-cni-plugins" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829077 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcdbee0b-45bb-462c-aac5-ccb96d5b814b" containerName="kube-multus-additional-cni-plugins" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829086 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1bd9bfc9-506a-44c3-9f9c-728cf75bf99d" containerName="pruner" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829092 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bd9bfc9-506a-44c3-9f9c-728cf75bf99d" containerName="pruner" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829109 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="651bc74f-974d-4562-b93a-016f18443fb6" containerName="extract-utilities" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829114 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="651bc74f-974d-4562-b93a-016f18443fb6" containerName="extract-utilities" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829122 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8c54e6d8-4a07-479d-aec6-81085d348561" containerName="registry-server" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829127 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c54e6d8-4a07-479d-aec6-81085d348561" containerName="registry-server" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829133 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d30fdb9a-8c75-4713-be06-be83ca5aa897" containerName="registry-server" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829138 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="d30fdb9a-8c75-4713-be06-be83ca5aa897" containerName="registry-server" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829145 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: 
removing container" podUID="651bc74f-974d-4562-b93a-016f18443fb6" containerName="registry-server" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829151 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="651bc74f-974d-4562-b93a-016f18443fb6" containerName="registry-server" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829160 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="26fedbc9-e967-46be-9bd2-00822aa128a5" containerName="registry-server" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829166 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="26fedbc9-e967-46be-9bd2-00822aa128a5" containerName="registry-server" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829178 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="26fedbc9-e967-46be-9bd2-00822aa128a5" containerName="extract-utilities" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829185 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="26fedbc9-e967-46be-9bd2-00822aa128a5" containerName="extract-utilities" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829192 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d30fdb9a-8c75-4713-be06-be83ca5aa897" containerName="extract-utilities" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829199 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="d30fdb9a-8c75-4713-be06-be83ca5aa897" containerName="extract-utilities" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829209 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8c54e6d8-4a07-479d-aec6-81085d348561" containerName="extract-utilities" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829214 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c54e6d8-4a07-479d-aec6-81085d348561" containerName="extract-utilities" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829221 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="651bc74f-974d-4562-b93a-016f18443fb6" containerName="extract-content" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829227 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="651bc74f-974d-4562-b93a-016f18443fb6" containerName="extract-content" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829310 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="bcdbee0b-45bb-462c-aac5-ccb96d5b814b" containerName="kube-multus-additional-cni-plugins" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829323 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="6c050f1b-62ba-4f8f-9eab-696aaf960f00" containerName="pruner" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829333 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="d30fdb9a-8c75-4713-be06-be83ca5aa897" containerName="registry-server" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829340 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="1bd9bfc9-506a-44c3-9f9c-728cf75bf99d" containerName="pruner" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829348 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="651bc74f-974d-4562-b93a-016f18443fb6" containerName="registry-server" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829356 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="8c54e6d8-4a07-479d-aec6-81085d348561" 
containerName="registry-server" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.829363 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="26fedbc9-e967-46be-9bd2-00822aa128a5" containerName="registry-server" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.852281 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.852428 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.855058 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.855568 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.933165 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c86ef070-31a6-4f4a-abac-f81ddbed2d97-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"c86ef070-31a6-4f4a-abac-f81ddbed2d97\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 20 00:12:49 crc kubenswrapper[5119]: I0220 00:12:49.933237 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c86ef070-31a6-4f4a-abac-f81ddbed2d97-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"c86ef070-31a6-4f4a-abac-f81ddbed2d97\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 20 00:12:49 crc kubenswrapper[5119]: E0220 00:12:49.994334 5119 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod6c050f1b_62ba_4f8f_9eab_696aaf960f00.slice\": RecentStats: unable to find data in memory cache]" Feb 20 00:12:50 crc kubenswrapper[5119]: I0220 00:12:50.034360 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c86ef070-31a6-4f4a-abac-f81ddbed2d97-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"c86ef070-31a6-4f4a-abac-f81ddbed2d97\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 20 00:12:50 crc kubenswrapper[5119]: I0220 00:12:50.034468 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c86ef070-31a6-4f4a-abac-f81ddbed2d97-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"c86ef070-31a6-4f4a-abac-f81ddbed2d97\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 20 00:12:50 crc kubenswrapper[5119]: I0220 00:12:50.034578 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c86ef070-31a6-4f4a-abac-f81ddbed2d97-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"c86ef070-31a6-4f4a-abac-f81ddbed2d97\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 20 00:12:50 crc kubenswrapper[5119]: I0220 00:12:50.060735 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c86ef070-31a6-4f4a-abac-f81ddbed2d97-kube-api-access\") pod 
\"revision-pruner-12-crc\" (UID: \"c86ef070-31a6-4f4a-abac-f81ddbed2d97\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 20 00:12:50 crc kubenswrapper[5119]: I0220 00:12:50.167279 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 20 00:12:50 crc kubenswrapper[5119]: I0220 00:12:50.388433 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Feb 20 00:12:50 crc kubenswrapper[5119]: I0220 00:12:50.815605 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"c86ef070-31a6-4f4a-abac-f81ddbed2d97","Type":"ContainerStarted","Data":"57d4b757e226745a1f4b310b0494bf4e24e0672710f544496b35c7cf76852cea"} Feb 20 00:12:50 crc kubenswrapper[5119]: I0220 00:12:50.815922 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"c86ef070-31a6-4f4a-abac-f81ddbed2d97","Type":"ContainerStarted","Data":"3ccd6d772c4296cbd0bea303e6e140838663d5ade61ec77225cc93e35978419c"} Feb 20 00:12:50 crc kubenswrapper[5119]: I0220 00:12:50.834599 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=1.834580298 podStartE2EDuration="1.834580298s" podCreationTimestamp="2026-02-20 00:12:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:50.833203911 +0000 UTC m=+152.812168203" watchObservedRunningTime="2026-02-20 00:12:50.834580298 +0000 UTC m=+152.813544580" Feb 20 00:12:51 crc kubenswrapper[5119]: I0220 00:12:51.836944 5119 generic.go:358] "Generic (PLEG): container finished" podID="c86ef070-31a6-4f4a-abac-f81ddbed2d97" containerID="57d4b757e226745a1f4b310b0494bf4e24e0672710f544496b35c7cf76852cea" exitCode=0 Feb 20 00:12:51 crc kubenswrapper[5119]: I0220 00:12:51.837099 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"c86ef070-31a6-4f4a-abac-f81ddbed2d97","Type":"ContainerDied","Data":"57d4b757e226745a1f4b310b0494bf4e24e0672710f544496b35c7cf76852cea"} Feb 20 00:12:53 crc kubenswrapper[5119]: I0220 00:12:53.096361 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 20 00:12:53 crc kubenswrapper[5119]: I0220 00:12:53.180909 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c86ef070-31a6-4f4a-abac-f81ddbed2d97-kube-api-access\") pod \"c86ef070-31a6-4f4a-abac-f81ddbed2d97\" (UID: \"c86ef070-31a6-4f4a-abac-f81ddbed2d97\") " Feb 20 00:12:53 crc kubenswrapper[5119]: I0220 00:12:53.181080 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c86ef070-31a6-4f4a-abac-f81ddbed2d97-kubelet-dir\") pod \"c86ef070-31a6-4f4a-abac-f81ddbed2d97\" (UID: \"c86ef070-31a6-4f4a-abac-f81ddbed2d97\") " Feb 20 00:12:53 crc kubenswrapper[5119]: I0220 00:12:53.181193 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c86ef070-31a6-4f4a-abac-f81ddbed2d97-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c86ef070-31a6-4f4a-abac-f81ddbed2d97" (UID: "c86ef070-31a6-4f4a-abac-f81ddbed2d97"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:12:53 crc kubenswrapper[5119]: I0220 00:12:53.181303 5119 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c86ef070-31a6-4f4a-abac-f81ddbed2d97-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:53 crc kubenswrapper[5119]: I0220 00:12:53.195533 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c86ef070-31a6-4f4a-abac-f81ddbed2d97-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c86ef070-31a6-4f4a-abac-f81ddbed2d97" (UID: "c86ef070-31a6-4f4a-abac-f81ddbed2d97"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:12:53 crc kubenswrapper[5119]: I0220 00:12:53.282648 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c86ef070-31a6-4f4a-abac-f81ddbed2d97-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:53 crc kubenswrapper[5119]: I0220 00:12:53.849199 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"c86ef070-31a6-4f4a-abac-f81ddbed2d97","Type":"ContainerDied","Data":"3ccd6d772c4296cbd0bea303e6e140838663d5ade61ec77225cc93e35978419c"} Feb 20 00:12:53 crc kubenswrapper[5119]: I0220 00:12:53.849622 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ccd6d772c4296cbd0bea303e6e140838663d5ade61ec77225cc93e35978419c" Feb 20 00:12:53 crc kubenswrapper[5119]: I0220 00:12:53.849334 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 20 00:12:59 crc kubenswrapper[5119]: I0220 00:12:59.484983 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-h4nz2"] Feb 20 00:13:00 crc kubenswrapper[5119]: E0220 00:13:00.151131 5119 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod6c050f1b_62ba_4f8f_9eab_696aaf960f00.slice\": RecentStats: unable to find data in memory cache]" Feb 20 00:13:10 crc kubenswrapper[5119]: E0220 00:13:10.354119 5119 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod6c050f1b_62ba_4f8f_9eab_696aaf960f00.slice\": RecentStats: unable to find data in memory cache]" Feb 20 00:13:11 crc kubenswrapper[5119]: I0220 00:13:11.366855 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Feb 20 00:13:11 crc kubenswrapper[5119]: I0220 00:13:11.368268 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c86ef070-31a6-4f4a-abac-f81ddbed2d97" containerName="pruner" Feb 20 00:13:11 crc kubenswrapper[5119]: I0220 00:13:11.368305 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="c86ef070-31a6-4f4a-abac-f81ddbed2d97" containerName="pruner" Feb 20 00:13:11 crc kubenswrapper[5119]: I0220 00:13:11.368665 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="c86ef070-31a6-4f4a-abac-f81ddbed2d97" containerName="pruner" Feb 20 00:13:11 crc kubenswrapper[5119]: I0220 00:13:11.375224 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 20 00:13:11 crc kubenswrapper[5119]: I0220 00:13:11.378740 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Feb 20 00:13:11 crc kubenswrapper[5119]: I0220 00:13:11.378831 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Feb 20 00:13:11 crc kubenswrapper[5119]: I0220 00:13:11.390347 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Feb 20 00:13:11 crc kubenswrapper[5119]: I0220 00:13:11.465425 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/372282dd-2962-4f1e-80ec-46aae632c315-kubelet-dir\") pod \"installer-12-crc\" (UID: \"372282dd-2962-4f1e-80ec-46aae632c315\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 20 00:13:11 crc kubenswrapper[5119]: I0220 00:13:11.465673 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/372282dd-2962-4f1e-80ec-46aae632c315-kube-api-access\") pod \"installer-12-crc\" (UID: \"372282dd-2962-4f1e-80ec-46aae632c315\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 20 00:13:11 crc kubenswrapper[5119]: I0220 00:13:11.465740 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/372282dd-2962-4f1e-80ec-46aae632c315-var-lock\") pod \"installer-12-crc\" (UID: \"372282dd-2962-4f1e-80ec-46aae632c315\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 20 00:13:11 crc kubenswrapper[5119]: I0220 00:13:11.567693 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/372282dd-2962-4f1e-80ec-46aae632c315-kube-api-access\") pod \"installer-12-crc\" (UID: \"372282dd-2962-4f1e-80ec-46aae632c315\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 20 00:13:11 crc kubenswrapper[5119]: I0220 00:13:11.567840 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/372282dd-2962-4f1e-80ec-46aae632c315-var-lock\") pod \"installer-12-crc\" (UID: \"372282dd-2962-4f1e-80ec-46aae632c315\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 20 00:13:11 crc kubenswrapper[5119]: I0220 00:13:11.567929 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/372282dd-2962-4f1e-80ec-46aae632c315-kubelet-dir\") pod \"installer-12-crc\" (UID: \"372282dd-2962-4f1e-80ec-46aae632c315\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 20 00:13:11 crc kubenswrapper[5119]: I0220 00:13:11.568014 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/372282dd-2962-4f1e-80ec-46aae632c315-var-lock\") pod \"installer-12-crc\" (UID: \"372282dd-2962-4f1e-80ec-46aae632c315\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 20 00:13:11 crc kubenswrapper[5119]: I0220 00:13:11.568171 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/372282dd-2962-4f1e-80ec-46aae632c315-kubelet-dir\") pod 
\"installer-12-crc\" (UID: \"372282dd-2962-4f1e-80ec-46aae632c315\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 20 00:13:11 crc kubenswrapper[5119]: I0220 00:13:11.605320 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/372282dd-2962-4f1e-80ec-46aae632c315-kube-api-access\") pod \"installer-12-crc\" (UID: \"372282dd-2962-4f1e-80ec-46aae632c315\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 20 00:13:11 crc kubenswrapper[5119]: I0220 00:13:11.707410 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 20 00:13:11 crc kubenswrapper[5119]: I0220 00:13:11.936271 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Feb 20 00:13:11 crc kubenswrapper[5119]: I0220 00:13:11.995437 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"372282dd-2962-4f1e-80ec-46aae632c315","Type":"ContainerStarted","Data":"228ca36c787336a41dc852dd144f18c1dfa2175e6d0ad3c530243f1c1b7728d7"} Feb 20 00:13:13 crc kubenswrapper[5119]: I0220 00:13:13.007751 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"372282dd-2962-4f1e-80ec-46aae632c315","Type":"ContainerStarted","Data":"c85e3277f663ce60b6a41caa448a43bc970383c747bc7cd1652a8d0410d95c42"} Feb 20 00:13:13 crc kubenswrapper[5119]: I0220 00:13:13.033271 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=2.033240509 podStartE2EDuration="2.033240509s" podCreationTimestamp="2026-02-20 00:13:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:13:13.03182761 +0000 UTC m=+175.010791962" watchObservedRunningTime="2026-02-20 00:13:13.033240509 +0000 UTC m=+175.012204831" Feb 20 00:13:22 crc kubenswrapper[5119]: I0220 00:13:22.705146 5119 ???:1] "http: TLS handshake error from 192.168.126.11:36168: no serving certificate available for the kubelet" Feb 20 00:13:24 crc kubenswrapper[5119]: I0220 00:13:24.521591 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" podUID="425ed829-90c3-45f1-b162-107726016bfd" containerName="oauth-openshift" containerID="cri-o://84d4b3b2ceac8f05a6447d5e2072576e00faf17ee77274040f63e026b11abdc3" gracePeriod=15 Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.015135 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.070242 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-774879d89d-62jbm"] Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.071908 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="425ed829-90c3-45f1-b162-107726016bfd" containerName="oauth-openshift" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.071947 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="425ed829-90c3-45f1-b162-107726016bfd" containerName="oauth-openshift" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.072466 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="425ed829-90c3-45f1-b162-107726016bfd" containerName="oauth-openshift" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.082210 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-774879d89d-62jbm"] Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.082452 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.094181 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-cliconfig\") pod \"425ed829-90c3-45f1-b162-107726016bfd\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.094687 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-user-idp-0-file-data\") pod \"425ed829-90c3-45f1-b162-107726016bfd\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.094780 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-ocp-branding-template\") pod \"425ed829-90c3-45f1-b162-107726016bfd\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.094829 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-user-template-provider-selection\") pod \"425ed829-90c3-45f1-b162-107726016bfd\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.094855 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-router-certs\") pod \"425ed829-90c3-45f1-b162-107726016bfd\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.094901 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-serving-cert\") pod 
\"425ed829-90c3-45f1-b162-107726016bfd\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.095000 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-user-template-login\") pod \"425ed829-90c3-45f1-b162-107726016bfd\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.095029 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-session\") pod \"425ed829-90c3-45f1-b162-107726016bfd\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.095080 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pcls\" (UniqueName: \"kubernetes.io/projected/425ed829-90c3-45f1-b162-107726016bfd-kube-api-access-4pcls\") pod \"425ed829-90c3-45f1-b162-107726016bfd\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.095125 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-service-ca\") pod \"425ed829-90c3-45f1-b162-107726016bfd\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.095198 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/425ed829-90c3-45f1-b162-107726016bfd-audit-dir\") pod \"425ed829-90c3-45f1-b162-107726016bfd\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.095227 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-trusted-ca-bundle\") pod \"425ed829-90c3-45f1-b162-107726016bfd\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.095275 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/425ed829-90c3-45f1-b162-107726016bfd-audit-policies\") pod \"425ed829-90c3-45f1-b162-107726016bfd\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.095403 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-user-template-error\") pod \"425ed829-90c3-45f1-b162-107726016bfd\" (UID: \"425ed829-90c3-45f1-b162-107726016bfd\") " Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.095620 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a1b82b47-5458-4aeb-b454-d4892f1dcc80-audit-dir\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 
00:13:25.095653 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.095717 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.095757 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-system-cliconfig\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.096631 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/425ed829-90c3-45f1-b162-107726016bfd-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "425ed829-90c3-45f1-b162-107726016bfd" (UID: "425ed829-90c3-45f1-b162-107726016bfd"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.097073 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.097136 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.097225 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-system-serving-cert\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.097305 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a1b82b47-5458-4aeb-b454-d4892f1dcc80-audit-policies\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: 
\"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.097369 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-system-router-certs\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.097389 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "425ed829-90c3-45f1-b162-107726016bfd" (UID: "425ed829-90c3-45f1-b162-107726016bfd"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.097420 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-system-session\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.097439 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/425ed829-90c3-45f1-b162-107726016bfd-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "425ed829-90c3-45f1-b162-107726016bfd" (UID: "425ed829-90c3-45f1-b162-107726016bfd"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.097934 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wnsb\" (UniqueName: \"kubernetes.io/projected/a1b82b47-5458-4aeb-b454-d4892f1dcc80-kube-api-access-5wnsb\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.097991 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-system-service-ca\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.098049 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-user-template-login\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.098086 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-user-template-error\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.098174 5119 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/425ed829-90c3-45f1-b162-107726016bfd-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.098195 5119 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/425ed829-90c3-45f1-b162-107726016bfd-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.098218 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.098450 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "425ed829-90c3-45f1-b162-107726016bfd" (UID: "425ed829-90c3-45f1-b162-107726016bfd"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.098687 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "425ed829-90c3-45f1-b162-107726016bfd" (UID: "425ed829-90c3-45f1-b162-107726016bfd"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.102665 5119 generic.go:358] "Generic (PLEG): container finished" podID="425ed829-90c3-45f1-b162-107726016bfd" containerID="84d4b3b2ceac8f05a6447d5e2072576e00faf17ee77274040f63e026b11abdc3" exitCode=0 Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.103168 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" event={"ID":"425ed829-90c3-45f1-b162-107726016bfd","Type":"ContainerDied","Data":"84d4b3b2ceac8f05a6447d5e2072576e00faf17ee77274040f63e026b11abdc3"} Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.103436 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" event={"ID":"425ed829-90c3-45f1-b162-107726016bfd","Type":"ContainerDied","Data":"319e884d9d15b35869d77785815835589c4b85560088fd241a6a116b0c952b0b"} Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.103475 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-h4nz2" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.103612 5119 scope.go:117] "RemoveContainer" containerID="84d4b3b2ceac8f05a6447d5e2072576e00faf17ee77274040f63e026b11abdc3" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.103770 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "425ed829-90c3-45f1-b162-107726016bfd" (UID: "425ed829-90c3-45f1-b162-107726016bfd"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.111205 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "425ed829-90c3-45f1-b162-107726016bfd" (UID: "425ed829-90c3-45f1-b162-107726016bfd"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.116393 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/425ed829-90c3-45f1-b162-107726016bfd-kube-api-access-4pcls" (OuterVolumeSpecName: "kube-api-access-4pcls") pod "425ed829-90c3-45f1-b162-107726016bfd" (UID: "425ed829-90c3-45f1-b162-107726016bfd"). InnerVolumeSpecName "kube-api-access-4pcls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.126057 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "425ed829-90c3-45f1-b162-107726016bfd" (UID: "425ed829-90c3-45f1-b162-107726016bfd"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.128638 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "425ed829-90c3-45f1-b162-107726016bfd" (UID: "425ed829-90c3-45f1-b162-107726016bfd"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.133708 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "425ed829-90c3-45f1-b162-107726016bfd" (UID: "425ed829-90c3-45f1-b162-107726016bfd"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.133886 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "425ed829-90c3-45f1-b162-107726016bfd" (UID: "425ed829-90c3-45f1-b162-107726016bfd"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.134029 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "425ed829-90c3-45f1-b162-107726016bfd" (UID: "425ed829-90c3-45f1-b162-107726016bfd"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.134271 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "425ed829-90c3-45f1-b162-107726016bfd" (UID: "425ed829-90c3-45f1-b162-107726016bfd"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.149188 5119 scope.go:117] "RemoveContainer" containerID="84d4b3b2ceac8f05a6447d5e2072576e00faf17ee77274040f63e026b11abdc3" Feb 20 00:13:25 crc kubenswrapper[5119]: E0220 00:13:25.149672 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84d4b3b2ceac8f05a6447d5e2072576e00faf17ee77274040f63e026b11abdc3\": container with ID starting with 84d4b3b2ceac8f05a6447d5e2072576e00faf17ee77274040f63e026b11abdc3 not found: ID does not exist" containerID="84d4b3b2ceac8f05a6447d5e2072576e00faf17ee77274040f63e026b11abdc3" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.149709 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84d4b3b2ceac8f05a6447d5e2072576e00faf17ee77274040f63e026b11abdc3"} err="failed to get container status \"84d4b3b2ceac8f05a6447d5e2072576e00faf17ee77274040f63e026b11abdc3\": rpc error: code = NotFound desc = could not find container \"84d4b3b2ceac8f05a6447d5e2072576e00faf17ee77274040f63e026b11abdc3\": container with ID starting with 84d4b3b2ceac8f05a6447d5e2072576e00faf17ee77274040f63e026b11abdc3 not found: ID does not exist" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.199534 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-system-serving-cert\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.199617 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a1b82b47-5458-4aeb-b454-d4892f1dcc80-audit-policies\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.199644 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-system-router-certs\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.199666 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-system-session\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.200392 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5wnsb\" (UniqueName: \"kubernetes.io/projected/a1b82b47-5458-4aeb-b454-d4892f1dcc80-kube-api-access-5wnsb\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.200443 5119 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-system-service-ca\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.200473 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-user-template-login\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.200494 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-user-template-error\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.200531 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a1b82b47-5458-4aeb-b454-d4892f1dcc80-audit-dir\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.200574 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.200620 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.200752 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a1b82b47-5458-4aeb-b454-d4892f1dcc80-audit-dir\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.201571 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-system-cliconfig\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.201641 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.201726 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.201831 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a1b82b47-5458-4aeb-b454-d4892f1dcc80-audit-policies\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.201925 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-system-service-ca\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.202072 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.202287 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.202799 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.202690 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-system-cliconfig\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.203251 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.203126 5119 reconciler_common.go:299] "Volume detached for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.203316 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.203335 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.203351 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.203364 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.203376 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4pcls\" (UniqueName: \"kubernetes.io/projected/425ed829-90c3-45f1-b162-107726016bfd-kube-api-access-4pcls\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.203390 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.203405 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/425ed829-90c3-45f1-b162-107726016bfd-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.206138 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-system-router-certs\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.206226 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-system-session\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.206358 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-user-template-login\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " 
pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.207064 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.207519 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-user-template-error\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.207788 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-system-serving-cert\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.208784 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.211337 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a1b82b47-5458-4aeb-b454-d4892f1dcc80-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.226489 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wnsb\" (UniqueName: \"kubernetes.io/projected/a1b82b47-5458-4aeb-b454-d4892f1dcc80-kube-api-access-5wnsb\") pod \"oauth-openshift-774879d89d-62jbm\" (UID: \"a1b82b47-5458-4aeb-b454-d4892f1dcc80\") " pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.439464 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.465031 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-h4nz2"] Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.469653 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-h4nz2"] Feb 20 00:13:25 crc kubenswrapper[5119]: I0220 00:13:25.783023 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-774879d89d-62jbm"] Feb 20 00:13:26 crc kubenswrapper[5119]: I0220 00:13:26.113191 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" event={"ID":"a1b82b47-5458-4aeb-b454-d4892f1dcc80","Type":"ContainerStarted","Data":"fad28de9d4f561d3b2b94cbc524a3261bb0bddb5998109753a5b7f8791cb4aee"} Feb 20 00:13:26 crc kubenswrapper[5119]: I0220 00:13:26.867755 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="425ed829-90c3-45f1-b162-107726016bfd" path="/var/lib/kubelet/pods/425ed829-90c3-45f1-b162-107726016bfd/volumes" Feb 20 00:13:27 crc kubenswrapper[5119]: I0220 00:13:27.124059 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" event={"ID":"a1b82b47-5458-4aeb-b454-d4892f1dcc80","Type":"ContainerStarted","Data":"465ea1f79f44262cdf0f542503e15db85d9fe9bc77df2f0f061571ff36eef1fe"} Feb 20 00:13:27 crc kubenswrapper[5119]: I0220 00:13:27.124517 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:27 crc kubenswrapper[5119]: I0220 00:13:27.134403 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" Feb 20 00:13:27 crc kubenswrapper[5119]: I0220 00:13:27.157447 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-774879d89d-62jbm" podStartSLOduration=28.157422536 podStartE2EDuration="28.157422536s" podCreationTimestamp="2026-02-20 00:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:13:27.15464238 +0000 UTC m=+189.133606752" watchObservedRunningTime="2026-02-20 00:13:27.157422536 +0000 UTC m=+189.136386868" Feb 20 00:13:31 crc kubenswrapper[5119]: I0220 00:13:31.152136 5119 generic.go:358] "Generic (PLEG): container finished" podID="2daa98ea-b766-495c-a3e7-5d73232ddc18" containerID="498388d0e251e58c582bbef583c6d3fd706afee4c41803c47b2c9dbf351ab3be" exitCode=0 Feb 20 00:13:31 crc kubenswrapper[5119]: I0220 00:13:31.152243 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29525760-qdjzk" event={"ID":"2daa98ea-b766-495c-a3e7-5d73232ddc18","Type":"ContainerDied","Data":"498388d0e251e58c582bbef583c6d3fd706afee4c41803c47b2c9dbf351ab3be"} Feb 20 00:13:32 crc kubenswrapper[5119]: I0220 00:13:32.490963 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29525760-qdjzk" Feb 20 00:13:32 crc kubenswrapper[5119]: I0220 00:13:32.631215 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2daa98ea-b766-495c-a3e7-5d73232ddc18-serviceca\") pod \"2daa98ea-b766-495c-a3e7-5d73232ddc18\" (UID: \"2daa98ea-b766-495c-a3e7-5d73232ddc18\") " Feb 20 00:13:32 crc kubenswrapper[5119]: I0220 00:13:32.632362 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98lwk\" (UniqueName: \"kubernetes.io/projected/2daa98ea-b766-495c-a3e7-5d73232ddc18-kube-api-access-98lwk\") pod \"2daa98ea-b766-495c-a3e7-5d73232ddc18\" (UID: \"2daa98ea-b766-495c-a3e7-5d73232ddc18\") " Feb 20 00:13:32 crc kubenswrapper[5119]: I0220 00:13:32.633086 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2daa98ea-b766-495c-a3e7-5d73232ddc18-serviceca" (OuterVolumeSpecName: "serviceca") pod "2daa98ea-b766-495c-a3e7-5d73232ddc18" (UID: "2daa98ea-b766-495c-a3e7-5d73232ddc18"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:13:32 crc kubenswrapper[5119]: I0220 00:13:32.633361 5119 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2daa98ea-b766-495c-a3e7-5d73232ddc18-serviceca\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:32 crc kubenswrapper[5119]: I0220 00:13:32.643058 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2daa98ea-b766-495c-a3e7-5d73232ddc18-kube-api-access-98lwk" (OuterVolumeSpecName: "kube-api-access-98lwk") pod "2daa98ea-b766-495c-a3e7-5d73232ddc18" (UID: "2daa98ea-b766-495c-a3e7-5d73232ddc18"). InnerVolumeSpecName "kube-api-access-98lwk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:13:32 crc kubenswrapper[5119]: I0220 00:13:32.735417 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-98lwk\" (UniqueName: \"kubernetes.io/projected/2daa98ea-b766-495c-a3e7-5d73232ddc18-kube-api-access-98lwk\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:33 crc kubenswrapper[5119]: I0220 00:13:33.167009 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29525760-qdjzk" event={"ID":"2daa98ea-b766-495c-a3e7-5d73232ddc18","Type":"ContainerDied","Data":"bb5e188e3336074fdfb6dc9bd1759ec66e9c9e98907db8f100f4a18d29479017"} Feb 20 00:13:33 crc kubenswrapper[5119]: I0220 00:13:33.167089 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb5e188e3336074fdfb6dc9bd1759ec66e9c9e98907db8f100f4a18d29479017" Feb 20 00:13:33 crc kubenswrapper[5119]: I0220 00:13:33.167525 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29525760-qdjzk" Feb 20 00:13:42 crc kubenswrapper[5119]: I0220 00:13:42.161452 5119 patch_prober.go:28] interesting pod/machine-config-daemon-l7jjp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:13:42 crc kubenswrapper[5119]: I0220 00:13:42.162085 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" podUID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.683686 5119 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.685163 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2daa98ea-b766-495c-a3e7-5d73232ddc18" containerName="image-pruner" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.685186 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="2daa98ea-b766-495c-a3e7-5d73232ddc18" containerName="image-pruner" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.685339 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="2daa98ea-b766-495c-a3e7-5d73232ddc18" containerName="image-pruner" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.699317 5119 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.699379 5119 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.699684 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.700401 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://c117f2e55fd27f6545f61b9e5a9b4d4082c31c133a91abee8c99eec651ffd5e7" gracePeriod=15 Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.700443 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://0f126153d3b23d4d86fbab667fa433e3301f373417be66da66ddb4613d03693c" gracePeriod=15 Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.700587 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f5b8c1c18a290a66db28dd2beef30b9fad2e2a11aaa57fa0cf9af61e3569c9cb" gracePeriod=15 Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.700454 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://0750a3e5db474c253e5acbbb6cd8b55f6a546a4d0dcb519fe8522e83acadaf5c" gracePeriod=15 Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.700825 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.700846 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.700861 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.700869 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.700878 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.700885 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.700896 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.700903 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.700938 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" 
containerName="kube-apiserver-cert-syncer" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.700944 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.700953 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.700959 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.700973 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.700979 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.700990 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.700998 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.701127 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.701144 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.701152 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.701173 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.701182 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.701189 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.701202 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.701312 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.701322 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.701338 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.701346 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.701489 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.701501 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.702686 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://c46d0135c3a754013f0e3f6bd50f94d244e87fcb264ea59ecac073594c52c17b" gracePeriod=15 Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.709785 5119 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.750893 5119 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.823186 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.823277 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.823315 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.823349 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.823454 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.823490 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.823612 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.823653 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.823681 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.823715 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.925478 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.925858 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.926010 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.926039 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.926032 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.926348 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.926402 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.926473 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.926475 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.926558 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.926574 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.926576 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.926596 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.926516 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.926640 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.926695 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.926744 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.926875 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.927272 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 20 00:13:50 crc kubenswrapper[5119]: I0220 00:13:50.927975 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:13:51 crc kubenswrapper[5119]: I0220 00:13:51.301225 5119 generic.go:358] "Generic (PLEG): container finished" podID="372282dd-2962-4f1e-80ec-46aae632c315" containerID="c85e3277f663ce60b6a41caa448a43bc970383c747bc7cd1652a8d0410d95c42" exitCode=0 Feb 20 00:13:51 crc kubenswrapper[5119]: I0220 00:13:51.301326 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"372282dd-2962-4f1e-80ec-46aae632c315","Type":"ContainerDied","Data":"c85e3277f663ce60b6a41caa448a43bc970383c747bc7cd1652a8d0410d95c42"} Feb 20 00:13:51 crc kubenswrapper[5119]: I0220 00:13:51.302588 5119 status_manager.go:895] "Failed to get status for pod" podUID="372282dd-2962-4f1e-80ec-46aae632c315" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 20 00:13:51 
crc kubenswrapper[5119]: I0220 00:13:51.304369 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 20 00:13:51 crc kubenswrapper[5119]: I0220 00:13:51.305951 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 20 00:13:51 crc kubenswrapper[5119]: I0220 00:13:51.306677 5119 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="0f126153d3b23d4d86fbab667fa433e3301f373417be66da66ddb4613d03693c" exitCode=0 Feb 20 00:13:51 crc kubenswrapper[5119]: I0220 00:13:51.306712 5119 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="c46d0135c3a754013f0e3f6bd50f94d244e87fcb264ea59ecac073594c52c17b" exitCode=0 Feb 20 00:13:51 crc kubenswrapper[5119]: I0220 00:13:51.306725 5119 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="0750a3e5db474c253e5acbbb6cd8b55f6a546a4d0dcb519fe8522e83acadaf5c" exitCode=0 Feb 20 00:13:51 crc kubenswrapper[5119]: I0220 00:13:51.306740 5119 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="f5b8c1c18a290a66db28dd2beef30b9fad2e2a11aaa57fa0cf9af61e3569c9cb" exitCode=2 Feb 20 00:13:51 crc kubenswrapper[5119]: I0220 00:13:51.306810 5119 scope.go:117] "RemoveContainer" containerID="f6194530545437982490600b50c8861471742ab842e14ab627a123778c428dcf" Feb 20 00:13:52 crc kubenswrapper[5119]: I0220 00:13:52.317155 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 20 00:13:52 crc kubenswrapper[5119]: I0220 00:13:52.590143 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 20 00:13:52 crc kubenswrapper[5119]: I0220 00:13:52.591014 5119 status_manager.go:895] "Failed to get status for pod" podUID="372282dd-2962-4f1e-80ec-46aae632c315" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 20 00:13:52 crc kubenswrapper[5119]: I0220 00:13:52.754176 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/372282dd-2962-4f1e-80ec-46aae632c315-kubelet-dir\") pod \"372282dd-2962-4f1e-80ec-46aae632c315\" (UID: \"372282dd-2962-4f1e-80ec-46aae632c315\") " Feb 20 00:13:52 crc kubenswrapper[5119]: I0220 00:13:52.754330 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/372282dd-2962-4f1e-80ec-46aae632c315-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "372282dd-2962-4f1e-80ec-46aae632c315" (UID: "372282dd-2962-4f1e-80ec-46aae632c315"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:13:52 crc kubenswrapper[5119]: I0220 00:13:52.754822 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/372282dd-2962-4f1e-80ec-46aae632c315-var-lock\") pod \"372282dd-2962-4f1e-80ec-46aae632c315\" (UID: \"372282dd-2962-4f1e-80ec-46aae632c315\") " Feb 20 00:13:52 crc kubenswrapper[5119]: I0220 00:13:52.754950 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/372282dd-2962-4f1e-80ec-46aae632c315-var-lock" (OuterVolumeSpecName: "var-lock") pod "372282dd-2962-4f1e-80ec-46aae632c315" (UID: "372282dd-2962-4f1e-80ec-46aae632c315"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:13:52 crc kubenswrapper[5119]: I0220 00:13:52.755073 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/372282dd-2962-4f1e-80ec-46aae632c315-kube-api-access\") pod \"372282dd-2962-4f1e-80ec-46aae632c315\" (UID: \"372282dd-2962-4f1e-80ec-46aae632c315\") " Feb 20 00:13:52 crc kubenswrapper[5119]: I0220 00:13:52.755506 5119 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/372282dd-2962-4f1e-80ec-46aae632c315-var-lock\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:52 crc kubenswrapper[5119]: I0220 00:13:52.755676 5119 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/372282dd-2962-4f1e-80ec-46aae632c315-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:52 crc kubenswrapper[5119]: I0220 00:13:52.762200 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/372282dd-2962-4f1e-80ec-46aae632c315-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "372282dd-2962-4f1e-80ec-46aae632c315" (UID: "372282dd-2962-4f1e-80ec-46aae632c315"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:13:52 crc kubenswrapper[5119]: I0220 00:13:52.859315 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/372282dd-2962-4f1e-80ec-46aae632c315-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.127984 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.128910 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.130082 5119 status_manager.go:895] "Failed to get status for pod" podUID="372282dd-2962-4f1e-80ec-46aae632c315" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.130604 5119 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.264944 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.265102 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.265119 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.265336 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.265632 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.265693 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.265669 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.265935 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.265945 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.266442 5119 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.266475 5119 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.266493 5119 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.266511 5119 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.268436 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.328973 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"372282dd-2962-4f1e-80ec-46aae632c315","Type":"ContainerDied","Data":"228ca36c787336a41dc852dd144f18c1dfa2175e6d0ad3c530243f1c1b7728d7"} Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.329625 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="228ca36c787336a41dc852dd144f18c1dfa2175e6d0ad3c530243f1c1b7728d7" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.329042 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.334282 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.336142 5119 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="c117f2e55fd27f6545f61b9e5a9b4d4082c31c133a91abee8c99eec651ffd5e7" exitCode=0 Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.336295 5119 scope.go:117] "RemoveContainer" containerID="0f126153d3b23d4d86fbab667fa433e3301f373417be66da66ddb4613d03693c" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.336390 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.336756 5119 status_manager.go:895] "Failed to get status for pod" podUID="372282dd-2962-4f1e-80ec-46aae632c315" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.337365 5119 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.359366 5119 scope.go:117] "RemoveContainer" containerID="c46d0135c3a754013f0e3f6bd50f94d244e87fcb264ea59ecac073594c52c17b" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.366143 5119 status_manager.go:895] "Failed to get status for pod" podUID="372282dd-2962-4f1e-80ec-46aae632c315" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.366745 5119 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.367494 5119 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.378785 5119 scope.go:117] "RemoveContainer" containerID="0750a3e5db474c253e5acbbb6cd8b55f6a546a4d0dcb519fe8522e83acadaf5c" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.393729 5119 scope.go:117] "RemoveContainer" containerID="f5b8c1c18a290a66db28dd2beef30b9fad2e2a11aaa57fa0cf9af61e3569c9cb" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.408694 5119 scope.go:117] "RemoveContainer" containerID="c117f2e55fd27f6545f61b9e5a9b4d4082c31c133a91abee8c99eec651ffd5e7" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.428842 5119 scope.go:117] "RemoveContainer" 
containerID="359d6c31c66888047499ad5422caffca5098a97e7e28b76b359cfec7390c5605" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.501040 5119 scope.go:117] "RemoveContainer" containerID="0f126153d3b23d4d86fbab667fa433e3301f373417be66da66ddb4613d03693c" Feb 20 00:13:53 crc kubenswrapper[5119]: E0220 00:13:53.501580 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f126153d3b23d4d86fbab667fa433e3301f373417be66da66ddb4613d03693c\": container with ID starting with 0f126153d3b23d4d86fbab667fa433e3301f373417be66da66ddb4613d03693c not found: ID does not exist" containerID="0f126153d3b23d4d86fbab667fa433e3301f373417be66da66ddb4613d03693c" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.501632 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f126153d3b23d4d86fbab667fa433e3301f373417be66da66ddb4613d03693c"} err="failed to get container status \"0f126153d3b23d4d86fbab667fa433e3301f373417be66da66ddb4613d03693c\": rpc error: code = NotFound desc = could not find container \"0f126153d3b23d4d86fbab667fa433e3301f373417be66da66ddb4613d03693c\": container with ID starting with 0f126153d3b23d4d86fbab667fa433e3301f373417be66da66ddb4613d03693c not found: ID does not exist" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.501667 5119 scope.go:117] "RemoveContainer" containerID="c46d0135c3a754013f0e3f6bd50f94d244e87fcb264ea59ecac073594c52c17b" Feb 20 00:13:53 crc kubenswrapper[5119]: E0220 00:13:53.502275 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c46d0135c3a754013f0e3f6bd50f94d244e87fcb264ea59ecac073594c52c17b\": container with ID starting with c46d0135c3a754013f0e3f6bd50f94d244e87fcb264ea59ecac073594c52c17b not found: ID does not exist" containerID="c46d0135c3a754013f0e3f6bd50f94d244e87fcb264ea59ecac073594c52c17b" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.502311 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c46d0135c3a754013f0e3f6bd50f94d244e87fcb264ea59ecac073594c52c17b"} err="failed to get container status \"c46d0135c3a754013f0e3f6bd50f94d244e87fcb264ea59ecac073594c52c17b\": rpc error: code = NotFound desc = could not find container \"c46d0135c3a754013f0e3f6bd50f94d244e87fcb264ea59ecac073594c52c17b\": container with ID starting with c46d0135c3a754013f0e3f6bd50f94d244e87fcb264ea59ecac073594c52c17b not found: ID does not exist" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.502335 5119 scope.go:117] "RemoveContainer" containerID="0750a3e5db474c253e5acbbb6cd8b55f6a546a4d0dcb519fe8522e83acadaf5c" Feb 20 00:13:53 crc kubenswrapper[5119]: E0220 00:13:53.502639 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0750a3e5db474c253e5acbbb6cd8b55f6a546a4d0dcb519fe8522e83acadaf5c\": container with ID starting with 0750a3e5db474c253e5acbbb6cd8b55f6a546a4d0dcb519fe8522e83acadaf5c not found: ID does not exist" containerID="0750a3e5db474c253e5acbbb6cd8b55f6a546a4d0dcb519fe8522e83acadaf5c" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.502663 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0750a3e5db474c253e5acbbb6cd8b55f6a546a4d0dcb519fe8522e83acadaf5c"} err="failed to get container status \"0750a3e5db474c253e5acbbb6cd8b55f6a546a4d0dcb519fe8522e83acadaf5c\": rpc error: code = 
NotFound desc = could not find container \"0750a3e5db474c253e5acbbb6cd8b55f6a546a4d0dcb519fe8522e83acadaf5c\": container with ID starting with 0750a3e5db474c253e5acbbb6cd8b55f6a546a4d0dcb519fe8522e83acadaf5c not found: ID does not exist" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.502683 5119 scope.go:117] "RemoveContainer" containerID="f5b8c1c18a290a66db28dd2beef30b9fad2e2a11aaa57fa0cf9af61e3569c9cb" Feb 20 00:13:53 crc kubenswrapper[5119]: E0220 00:13:53.503001 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5b8c1c18a290a66db28dd2beef30b9fad2e2a11aaa57fa0cf9af61e3569c9cb\": container with ID starting with f5b8c1c18a290a66db28dd2beef30b9fad2e2a11aaa57fa0cf9af61e3569c9cb not found: ID does not exist" containerID="f5b8c1c18a290a66db28dd2beef30b9fad2e2a11aaa57fa0cf9af61e3569c9cb" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.503061 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5b8c1c18a290a66db28dd2beef30b9fad2e2a11aaa57fa0cf9af61e3569c9cb"} err="failed to get container status \"f5b8c1c18a290a66db28dd2beef30b9fad2e2a11aaa57fa0cf9af61e3569c9cb\": rpc error: code = NotFound desc = could not find container \"f5b8c1c18a290a66db28dd2beef30b9fad2e2a11aaa57fa0cf9af61e3569c9cb\": container with ID starting with f5b8c1c18a290a66db28dd2beef30b9fad2e2a11aaa57fa0cf9af61e3569c9cb not found: ID does not exist" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.503100 5119 scope.go:117] "RemoveContainer" containerID="c117f2e55fd27f6545f61b9e5a9b4d4082c31c133a91abee8c99eec651ffd5e7" Feb 20 00:13:53 crc kubenswrapper[5119]: E0220 00:13:53.503389 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c117f2e55fd27f6545f61b9e5a9b4d4082c31c133a91abee8c99eec651ffd5e7\": container with ID starting with c117f2e55fd27f6545f61b9e5a9b4d4082c31c133a91abee8c99eec651ffd5e7 not found: ID does not exist" containerID="c117f2e55fd27f6545f61b9e5a9b4d4082c31c133a91abee8c99eec651ffd5e7" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.503410 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c117f2e55fd27f6545f61b9e5a9b4d4082c31c133a91abee8c99eec651ffd5e7"} err="failed to get container status \"c117f2e55fd27f6545f61b9e5a9b4d4082c31c133a91abee8c99eec651ffd5e7\": rpc error: code = NotFound desc = could not find container \"c117f2e55fd27f6545f61b9e5a9b4d4082c31c133a91abee8c99eec651ffd5e7\": container with ID starting with c117f2e55fd27f6545f61b9e5a9b4d4082c31c133a91abee8c99eec651ffd5e7 not found: ID does not exist" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.503422 5119 scope.go:117] "RemoveContainer" containerID="359d6c31c66888047499ad5422caffca5098a97e7e28b76b359cfec7390c5605" Feb 20 00:13:53 crc kubenswrapper[5119]: E0220 00:13:53.503839 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"359d6c31c66888047499ad5422caffca5098a97e7e28b76b359cfec7390c5605\": container with ID starting with 359d6c31c66888047499ad5422caffca5098a97e7e28b76b359cfec7390c5605 not found: ID does not exist" containerID="359d6c31c66888047499ad5422caffca5098a97e7e28b76b359cfec7390c5605" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.503863 5119 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"359d6c31c66888047499ad5422caffca5098a97e7e28b76b359cfec7390c5605"} err="failed to get container status \"359d6c31c66888047499ad5422caffca5098a97e7e28b76b359cfec7390c5605\": rpc error: code = NotFound desc = could not find container \"359d6c31c66888047499ad5422caffca5098a97e7e28b76b359cfec7390c5605\": container with ID starting with 359d6c31c66888047499ad5422caffca5098a97e7e28b76b359cfec7390c5605 not found: ID does not exist" Feb 20 00:13:53 crc kubenswrapper[5119]: E0220 00:13:53.657036 5119 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 20 00:13:53 crc kubenswrapper[5119]: E0220 00:13:53.657683 5119 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 20 00:13:53 crc kubenswrapper[5119]: E0220 00:13:53.658114 5119 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 20 00:13:53 crc kubenswrapper[5119]: E0220 00:13:53.658769 5119 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 20 00:13:53 crc kubenswrapper[5119]: E0220 00:13:53.659124 5119 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 20 00:13:53 crc kubenswrapper[5119]: I0220 00:13:53.659170 5119 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 20 00:13:53 crc kubenswrapper[5119]: E0220 00:13:53.659763 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="200ms" Feb 20 00:13:53 crc kubenswrapper[5119]: E0220 00:13:53.860905 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="400ms" Feb 20 00:13:54 crc kubenswrapper[5119]: E0220 00:13:54.262466 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="800ms" Feb 20 00:13:54 crc kubenswrapper[5119]: I0220 00:13:54.868589 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Feb 20 00:13:55 crc kubenswrapper[5119]: E0220 00:13:55.065132 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="1.6s" Feb 20 00:13:55 crc kubenswrapper[5119]: E0220 00:13:55.752952 5119 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.201:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 20 00:13:55 crc kubenswrapper[5119]: I0220 00:13:55.753770 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 20 00:13:55 crc kubenswrapper[5119]: W0220 00:13:55.776993 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7dbc7e1ee9c187a863ef9b473fad27b.slice/crio-bc876137316d5eed87f651fe4d88ed9af9eaa877110153d9e4c072249a2198cb WatchSource:0}: Error finding container bc876137316d5eed87f651fe4d88ed9af9eaa877110153d9e4c072249a2198cb: Status 404 returned error can't find the container with id bc876137316d5eed87f651fe4d88ed9af9eaa877110153d9e4c072249a2198cb Feb 20 00:13:55 crc kubenswrapper[5119]: E0220 00:13:55.781032 5119 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.201:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1895cc18e7c3ade0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:13:55.779800544 +0000 UTC m=+217.758764846,LastTimestamp:2026-02-20 00:13:55.779800544 +0000 UTC m=+217.758764846,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:13:56 crc kubenswrapper[5119]: I0220 00:13:56.362717 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"ef9281a405947f0a594e8f2fce43f59e8f830d5688d675347733d4c7cec59cdd"} Feb 20 00:13:56 crc kubenswrapper[5119]: I0220 00:13:56.362840 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"bc876137316d5eed87f651fe4d88ed9af9eaa877110153d9e4c072249a2198cb"} Feb 20 00:13:56 crc kubenswrapper[5119]: I0220 00:13:56.363430 5119 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 20 00:13:56 crc kubenswrapper[5119]: I0220 00:13:56.364091 5119 status_manager.go:895] "Failed to get status for pod" podUID="372282dd-2962-4f1e-80ec-46aae632c315" pod="openshift-kube-apiserver/installer-12-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 20 00:13:56 crc kubenswrapper[5119]: E0220 00:13:56.364255 5119 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.201:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 20 00:13:56 crc kubenswrapper[5119]: E0220 00:13:56.666025 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="3.2s" Feb 20 00:13:57 crc kubenswrapper[5119]: E0220 00:13:57.920132 5119 desired_state_of_world_populator.go:305] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.201:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" volumeName="registry-storage" Feb 20 00:13:58 crc kubenswrapper[5119]: I0220 00:13:58.866824 5119 status_manager.go:895] "Failed to get status for pod" podUID="372282dd-2962-4f1e-80ec-46aae632c315" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 20 00:13:59 crc kubenswrapper[5119]: E0220 00:13:59.244050 5119 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.201:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1895cc18e7c3ade0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:13:55.779800544 +0000 UTC m=+217.758764846,LastTimestamp:2026-02-20 00:13:55.779800544 +0000 UTC m=+217.758764846,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:13:59 crc kubenswrapper[5119]: E0220 00:13:59.867346 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="6.4s" Feb 20 00:14:03 crc kubenswrapper[5119]: I0220 00:14:03.419901 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" 
Feb 20 00:14:03 crc kubenswrapper[5119]: I0220 00:14:03.420830 5119 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="ac6691d283aafa26b76bde45116abaa5c977877e0a499e7f032c8ecbb241d0cf" exitCode=1 Feb 20 00:14:03 crc kubenswrapper[5119]: I0220 00:14:03.421009 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"ac6691d283aafa26b76bde45116abaa5c977877e0a499e7f032c8ecbb241d0cf"} Feb 20 00:14:03 crc kubenswrapper[5119]: I0220 00:14:03.422167 5119 scope.go:117] "RemoveContainer" containerID="ac6691d283aafa26b76bde45116abaa5c977877e0a499e7f032c8ecbb241d0cf" Feb 20 00:14:03 crc kubenswrapper[5119]: I0220 00:14:03.422469 5119 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 20 00:14:03 crc kubenswrapper[5119]: I0220 00:14:03.423064 5119 status_manager.go:895] "Failed to get status for pod" podUID="372282dd-2962-4f1e-80ec-46aae632c315" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 20 00:14:04 crc kubenswrapper[5119]: I0220 00:14:04.431417 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 20 00:14:04 crc kubenswrapper[5119]: I0220 00:14:04.431990 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"c392f0323ea0032fc4a7c6e71d63506bec3041e79b15d4eb9342dd4ea3baeb03"} Feb 20 00:14:04 crc kubenswrapper[5119]: I0220 00:14:04.433354 5119 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 20 00:14:04 crc kubenswrapper[5119]: I0220 00:14:04.433900 5119 status_manager.go:895] "Failed to get status for pod" podUID="372282dd-2962-4f1e-80ec-46aae632c315" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 20 00:14:04 crc kubenswrapper[5119]: I0220 00:14:04.868742 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:14:04 crc kubenswrapper[5119]: I0220 00:14:04.870291 5119 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 20 00:14:04 crc kubenswrapper[5119]: I0220 00:14:04.871086 5119 status_manager.go:895] "Failed to get status for pod" podUID="372282dd-2962-4f1e-80ec-46aae632c315" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 20 00:14:04 crc kubenswrapper[5119]: I0220 00:14:04.893086 5119 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a71b238f-114f-4e68-a564-732cd30a1fee" Feb 20 00:14:04 crc kubenswrapper[5119]: I0220 00:14:04.893143 5119 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a71b238f-114f-4e68-a564-732cd30a1fee" Feb 20 00:14:04 crc kubenswrapper[5119]: E0220 00:14:04.893845 5119 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:14:04 crc kubenswrapper[5119]: I0220 00:14:04.894346 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:14:04 crc kubenswrapper[5119]: W0220 00:14:04.916882 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755cc5f99000cc11e193051474d4e2.slice/crio-1e7460646053c9ced35d04c13e141867a7c57956b0f78b29dc7e0ba434746b92 WatchSource:0}: Error finding container 1e7460646053c9ced35d04c13e141867a7c57956b0f78b29dc7e0ba434746b92: Status 404 returned error can't find the container with id 1e7460646053c9ced35d04c13e141867a7c57956b0f78b29dc7e0ba434746b92 Feb 20 00:14:05 crc kubenswrapper[5119]: I0220 00:14:05.447053 5119 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="a3851636a7db27499cd465ce79cf3e2f4679c1ece81b7eb4c9420f6f68840bc2" exitCode=0 Feb 20 00:14:05 crc kubenswrapper[5119]: I0220 00:14:05.447187 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"a3851636a7db27499cd465ce79cf3e2f4679c1ece81b7eb4c9420f6f68840bc2"} Feb 20 00:14:05 crc kubenswrapper[5119]: I0220 00:14:05.447274 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"1e7460646053c9ced35d04c13e141867a7c57956b0f78b29dc7e0ba434746b92"} Feb 20 00:14:05 crc kubenswrapper[5119]: I0220 00:14:05.447823 5119 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a71b238f-114f-4e68-a564-732cd30a1fee" Feb 20 00:14:05 crc kubenswrapper[5119]: I0220 00:14:05.447844 5119 mirror_client.go:130] "Deleting a 
mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a71b238f-114f-4e68-a564-732cd30a1fee" Feb 20 00:14:05 crc kubenswrapper[5119]: I0220 00:14:05.448631 5119 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 20 00:14:05 crc kubenswrapper[5119]: E0220 00:14:05.448660 5119 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:14:05 crc kubenswrapper[5119]: I0220 00:14:05.449164 5119 status_manager.go:895] "Failed to get status for pod" podUID="372282dd-2962-4f1e-80ec-46aae632c315" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 20 00:14:06 crc kubenswrapper[5119]: I0220 00:14:06.460695 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"444ba96490c64bbe79745b1d0949f9c019719f84b3ddda7e839383f0f8b32de8"} Feb 20 00:14:06 crc kubenswrapper[5119]: I0220 00:14:06.461219 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"6dd4e0e97823afeaf29f238d5a9cded2eac6a2a9f87e82ecab5df70983487968"} Feb 20 00:14:06 crc kubenswrapper[5119]: I0220 00:14:06.461235 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"d46d84666edf9b181e38bb27580d6e83f7c19b2fa3f6a2aa00671751cd8411c2"} Feb 20 00:14:07 crc kubenswrapper[5119]: I0220 00:14:07.489803 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"57d9c7f4dd1dd1a0ed0f497780719ab7a3285050111dd9e07f38c4579fd135bc"} Feb 20 00:14:07 crc kubenswrapper[5119]: I0220 00:14:07.490716 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"1e6fc78bb8d2c119d19e84c53808fd19349b05084ba85db9d5d5a29cbac9999a"} Feb 20 00:14:07 crc kubenswrapper[5119]: I0220 00:14:07.491179 5119 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a71b238f-114f-4e68-a564-732cd30a1fee" Feb 20 00:14:07 crc kubenswrapper[5119]: I0220 00:14:07.491273 5119 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a71b238f-114f-4e68-a564-732cd30a1fee" Feb 20 00:14:07 crc kubenswrapper[5119]: I0220 00:14:07.491517 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:14:09 crc kubenswrapper[5119]: I0220 00:14:09.895474 5119 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:14:09 crc kubenswrapper[5119]: I0220 00:14:09.896374 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:14:09 crc kubenswrapper[5119]: I0220 00:14:09.902754 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:14:10 crc kubenswrapper[5119]: I0220 00:14:10.871150 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:14:12 crc kubenswrapper[5119]: I0220 00:14:12.160798 5119 patch_prober.go:28] interesting pod/machine-config-daemon-l7jjp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:14:12 crc kubenswrapper[5119]: I0220 00:14:12.160897 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" podUID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:14:12 crc kubenswrapper[5119]: I0220 00:14:12.508438 5119 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:14:12 crc kubenswrapper[5119]: I0220 00:14:12.508484 5119 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:14:12 crc kubenswrapper[5119]: I0220 00:14:12.602135 5119 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="eeae028e-00fc-4378-9392-4dfc9386c5bb" Feb 20 00:14:12 crc kubenswrapper[5119]: I0220 00:14:12.771000 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:14:12 crc kubenswrapper[5119]: I0220 00:14:12.775449 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:14:13 crc kubenswrapper[5119]: I0220 00:14:13.543392 5119 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a71b238f-114f-4e68-a564-732cd30a1fee" Feb 20 00:14:13 crc kubenswrapper[5119]: I0220 00:14:13.543671 5119 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a71b238f-114f-4e68-a564-732cd30a1fee" Feb 20 00:14:13 crc kubenswrapper[5119]: I0220 00:14:13.546315 5119 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="eeae028e-00fc-4378-9392-4dfc9386c5bb" Feb 20 00:14:13 crc kubenswrapper[5119]: I0220 00:14:13.548499 5119 status_manager.go:346] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://d46d84666edf9b181e38bb27580d6e83f7c19b2fa3f6a2aa00671751cd8411c2" Feb 20 00:14:13 crc kubenswrapper[5119]: I0220 00:14:13.548600 5119 
kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:14:13 crc kubenswrapper[5119]: I0220 00:14:13.548802 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:14:14 crc kubenswrapper[5119]: I0220 00:14:14.549139 5119 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a71b238f-114f-4e68-a564-732cd30a1fee" Feb 20 00:14:14 crc kubenswrapper[5119]: I0220 00:14:14.549181 5119 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a71b238f-114f-4e68-a564-732cd30a1fee" Feb 20 00:14:14 crc kubenswrapper[5119]: I0220 00:14:14.554015 5119 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="eeae028e-00fc-4378-9392-4dfc9386c5bb" Feb 20 00:14:22 crc kubenswrapper[5119]: I0220 00:14:22.988755 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Feb 20 00:14:22 crc kubenswrapper[5119]: I0220 00:14:22.989171 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Feb 20 00:14:22 crc kubenswrapper[5119]: I0220 00:14:22.989351 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Feb 20 00:14:23 crc kubenswrapper[5119]: I0220 00:14:23.118461 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Feb 20 00:14:23 crc kubenswrapper[5119]: I0220 00:14:23.419737 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Feb 20 00:14:23 crc kubenswrapper[5119]: I0220 00:14:23.465811 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Feb 20 00:14:23 crc kubenswrapper[5119]: I0220 00:14:23.544244 5119 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Feb 20 00:14:23 crc kubenswrapper[5119]: I0220 00:14:23.616228 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:14:23 crc kubenswrapper[5119]: I0220 00:14:23.726315 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Feb 20 00:14:24 crc kubenswrapper[5119]: I0220 00:14:24.005784 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Feb 20 00:14:24 crc kubenswrapper[5119]: I0220 00:14:24.179333 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Feb 20 00:14:24 crc kubenswrapper[5119]: I0220 00:14:24.432992 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Feb 20 00:14:24 crc kubenswrapper[5119]: I0220 00:14:24.478120 5119 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Feb 20 00:14:24 crc kubenswrapper[5119]: I0220 00:14:24.818892 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Feb 20 00:14:24 crc kubenswrapper[5119]: I0220 00:14:24.872991 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Feb 20 00:14:24 crc kubenswrapper[5119]: I0220 00:14:24.887783 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Feb 20 00:14:24 crc kubenswrapper[5119]: I0220 00:14:24.955226 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Feb 20 00:14:25 crc kubenswrapper[5119]: I0220 00:14:25.277865 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Feb 20 00:14:25 crc kubenswrapper[5119]: I0220 00:14:25.466391 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Feb 20 00:14:25 crc kubenswrapper[5119]: I0220 00:14:25.505996 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Feb 20 00:14:25 crc kubenswrapper[5119]: I0220 00:14:25.566016 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:14:25 crc kubenswrapper[5119]: I0220 00:14:25.668195 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Feb 20 00:14:25 crc kubenswrapper[5119]: I0220 00:14:25.744916 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Feb 20 00:14:25 crc kubenswrapper[5119]: I0220 00:14:25.762462 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Feb 20 00:14:25 crc kubenswrapper[5119]: I0220 00:14:25.993195 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Feb 20 00:14:26 crc kubenswrapper[5119]: I0220 00:14:26.090425 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Feb 20 00:14:26 crc kubenswrapper[5119]: I0220 00:14:26.117026 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Feb 20 00:14:26 crc kubenswrapper[5119]: I0220 00:14:26.123810 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Feb 20 00:14:26 crc kubenswrapper[5119]: I0220 00:14:26.240612 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Feb 20 00:14:26 crc kubenswrapper[5119]: I0220 00:14:26.240673 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Feb 20 00:14:26 crc kubenswrapper[5119]: I0220 00:14:26.252311 
5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Feb 20 00:14:26 crc kubenswrapper[5119]: I0220 00:14:26.263341 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Feb 20 00:14:26 crc kubenswrapper[5119]: I0220 00:14:26.294242 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 20 00:14:26 crc kubenswrapper[5119]: I0220 00:14:26.295401 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Feb 20 00:14:26 crc kubenswrapper[5119]: I0220 00:14:26.307099 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Feb 20 00:14:26 crc kubenswrapper[5119]: I0220 00:14:26.380907 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:14:26 crc kubenswrapper[5119]: I0220 00:14:26.423883 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Feb 20 00:14:26 crc kubenswrapper[5119]: I0220 00:14:26.435389 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Feb 20 00:14:26 crc kubenswrapper[5119]: I0220 00:14:26.531124 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Feb 20 00:14:26 crc kubenswrapper[5119]: I0220 00:14:26.716195 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Feb 20 00:14:26 crc kubenswrapper[5119]: I0220 00:14:26.745730 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Feb 20 00:14:26 crc kubenswrapper[5119]: I0220 00:14:26.861848 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Feb 20 00:14:26 crc kubenswrapper[5119]: I0220 00:14:26.877912 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Feb 20 00:14:26 crc kubenswrapper[5119]: I0220 00:14:26.889269 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Feb 20 00:14:27 crc kubenswrapper[5119]: I0220 00:14:27.065089 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Feb 20 00:14:27 crc kubenswrapper[5119]: I0220 00:14:27.073651 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Feb 20 00:14:27 crc kubenswrapper[5119]: I0220 00:14:27.171462 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Feb 20 00:14:27 crc kubenswrapper[5119]: I0220 00:14:27.178229 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Feb 20 00:14:27 crc 
kubenswrapper[5119]: I0220 00:14:27.194439 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Feb 20 00:14:27 crc kubenswrapper[5119]: I0220 00:14:27.365978 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Feb 20 00:14:27 crc kubenswrapper[5119]: I0220 00:14:27.395825 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Feb 20 00:14:27 crc kubenswrapper[5119]: I0220 00:14:27.428403 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Feb 20 00:14:27 crc kubenswrapper[5119]: I0220 00:14:27.635854 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:14:27 crc kubenswrapper[5119]: I0220 00:14:27.638671 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Feb 20 00:14:27 crc kubenswrapper[5119]: I0220 00:14:27.652615 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Feb 20 00:14:27 crc kubenswrapper[5119]: I0220 00:14:27.680705 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Feb 20 00:14:27 crc kubenswrapper[5119]: I0220 00:14:27.755715 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Feb 20 00:14:27 crc kubenswrapper[5119]: I0220 00:14:27.779307 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Feb 20 00:14:27 crc kubenswrapper[5119]: I0220 00:14:27.886291 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Feb 20 00:14:27 crc kubenswrapper[5119]: I0220 00:14:27.908259 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Feb 20 00:14:27 crc kubenswrapper[5119]: I0220 00:14:27.908620 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Feb 20 00:14:27 crc kubenswrapper[5119]: I0220 00:14:27.947193 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Feb 20 00:14:27 crc kubenswrapper[5119]: I0220 00:14:27.950182 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Feb 20 00:14:28 crc kubenswrapper[5119]: I0220 00:14:28.080338 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Feb 20 00:14:28 crc kubenswrapper[5119]: I0220 00:14:28.159815 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Feb 20 00:14:28 crc kubenswrapper[5119]: I0220 
00:14:28.199333 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:14:28 crc kubenswrapper[5119]: I0220 00:14:28.226109 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Feb 20 00:14:28 crc kubenswrapper[5119]: I0220 00:14:28.303192 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Feb 20 00:14:28 crc kubenswrapper[5119]: I0220 00:14:28.306812 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Feb 20 00:14:28 crc kubenswrapper[5119]: I0220 00:14:28.448853 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Feb 20 00:14:28 crc kubenswrapper[5119]: I0220 00:14:28.553620 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Feb 20 00:14:28 crc kubenswrapper[5119]: I0220 00:14:28.603742 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Feb 20 00:14:28 crc kubenswrapper[5119]: I0220 00:14:28.627178 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Feb 20 00:14:28 crc kubenswrapper[5119]: I0220 00:14:28.645639 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Feb 20 00:14:28 crc kubenswrapper[5119]: I0220 00:14:28.647317 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Feb 20 00:14:28 crc kubenswrapper[5119]: I0220 00:14:28.796790 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Feb 20 00:14:28 crc kubenswrapper[5119]: I0220 00:14:28.832077 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Feb 20 00:14:28 crc kubenswrapper[5119]: I0220 00:14:28.854974 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Feb 20 00:14:28 crc kubenswrapper[5119]: I0220 00:14:28.866821 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Feb 20 00:14:28 crc kubenswrapper[5119]: I0220 00:14:28.868224 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Feb 20 00:14:28 crc kubenswrapper[5119]: I0220 00:14:28.890565 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Feb 20 00:14:28 crc kubenswrapper[5119]: I0220 00:14:28.901875 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Feb 20 00:14:28 crc kubenswrapper[5119]: I0220 00:14:28.926396 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Feb 20 00:14:28 crc kubenswrapper[5119]: I0220 00:14:28.933969 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Feb 20 00:14:28 crc kubenswrapper[5119]: I0220 00:14:28.973047 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Feb 20 00:14:28 crc kubenswrapper[5119]: I0220 00:14:28.994845 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Feb 20 00:14:28 crc kubenswrapper[5119]: I0220 00:14:28.998722 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Feb 20 00:14:29 crc kubenswrapper[5119]: I0220 00:14:29.003851 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Feb 20 00:14:29 crc kubenswrapper[5119]: I0220 00:14:29.005976 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Feb 20 00:14:29 crc kubenswrapper[5119]: I0220 00:14:29.069969 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Feb 20 00:14:29 crc kubenswrapper[5119]: I0220 00:14:29.152743 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Feb 20 00:14:29 crc kubenswrapper[5119]: I0220 00:14:29.234518 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Feb 20 00:14:29 crc kubenswrapper[5119]: I0220 00:14:29.266960 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Feb 20 00:14:29 crc kubenswrapper[5119]: I0220 00:14:29.293577 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Feb 20 00:14:29 crc kubenswrapper[5119]: I0220 00:14:29.315364 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Feb 20 00:14:29 crc kubenswrapper[5119]: I0220 00:14:29.364862 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Feb 20 00:14:29 crc kubenswrapper[5119]: I0220 00:14:29.372914 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Feb 20 00:14:29 crc kubenswrapper[5119]: I0220 00:14:29.572365 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Feb 20 00:14:29 crc kubenswrapper[5119]: I0220 00:14:29.811305 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Feb 20 00:14:29 crc kubenswrapper[5119]: I0220 00:14:29.885458 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Feb 20 00:14:29 crc kubenswrapper[5119]: I0220 00:14:29.945799 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Feb 20 00:14:30 crc kubenswrapper[5119]: I0220 00:14:30.101364 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Feb 20 00:14:30 crc kubenswrapper[5119]: I0220 00:14:30.127261 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Feb 20 00:14:30 crc kubenswrapper[5119]: I0220 00:14:30.146893 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Feb 20 00:14:30 crc kubenswrapper[5119]: I0220 00:14:30.146931 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Feb 20 00:14:30 crc kubenswrapper[5119]: I0220 00:14:30.168083 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Feb 20 00:14:30 crc kubenswrapper[5119]: I0220 00:14:30.184667 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Feb 20 00:14:30 crc kubenswrapper[5119]: I0220 00:14:30.220241 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Feb 20 00:14:30 crc kubenswrapper[5119]: I0220 00:14:30.247902 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Feb 20 00:14:30 crc kubenswrapper[5119]: I0220 00:14:30.295348 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Feb 20 00:14:30 crc kubenswrapper[5119]: I0220 00:14:30.319895 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Feb 20 00:14:30 crc kubenswrapper[5119]: I0220 00:14:30.335122 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Feb 20 00:14:30 crc kubenswrapper[5119]: I0220 00:14:30.347225 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Feb 20 00:14:30 crc kubenswrapper[5119]: I0220 00:14:30.352266 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Feb 20 00:14:30 crc kubenswrapper[5119]: I0220 00:14:30.413211 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Feb 20 00:14:30 crc kubenswrapper[5119]: I0220 00:14:30.541348 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Feb 20 00:14:30 crc kubenswrapper[5119]: I0220 00:14:30.598740 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" 
Feb 20 00:14:30 crc kubenswrapper[5119]: I0220 00:14:30.649133 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Feb 20 00:14:30 crc kubenswrapper[5119]: I0220 00:14:30.677619 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Feb 20 00:14:30 crc kubenswrapper[5119]: I0220 00:14:30.708151 5119 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Feb 20 00:14:30 crc kubenswrapper[5119]: I0220 00:14:30.781831 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Feb 20 00:14:30 crc kubenswrapper[5119]: I0220 00:14:30.795606 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Feb 20 00:14:30 crc kubenswrapper[5119]: I0220 00:14:30.797976 5119 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Feb 20 00:14:30 crc kubenswrapper[5119]: I0220 00:14:30.801268 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Feb 20 00:14:30 crc kubenswrapper[5119]: I0220 00:14:30.808612 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 20 00:14:30 crc kubenswrapper[5119]: I0220 00:14:30.808731 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 20 00:14:30 crc kubenswrapper[5119]: I0220 00:14:30.813952 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:14:30 crc kubenswrapper[5119]: I0220 00:14:30.853487 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=18.853456807 podStartE2EDuration="18.853456807s" podCreationTimestamp="2026-02-20 00:14:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:14:30.849721743 +0000 UTC m=+252.828686045" watchObservedRunningTime="2026-02-20 00:14:30.853456807 +0000 UTC m=+252.832421279" Feb 20 00:14:31 crc kubenswrapper[5119]: I0220 00:14:31.079428 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Feb 20 00:14:31 crc kubenswrapper[5119]: I0220 00:14:31.087432 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Feb 20 00:14:31 crc kubenswrapper[5119]: I0220 00:14:31.115572 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Feb 20 00:14:31 crc kubenswrapper[5119]: I0220 00:14:31.172991 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Feb 20 00:14:31 crc kubenswrapper[5119]: I0220 00:14:31.175753 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Feb 20 00:14:31 crc kubenswrapper[5119]: I0220 
00:14:31.177882 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Feb 20 00:14:31 crc kubenswrapper[5119]: I0220 00:14:31.316428 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Feb 20 00:14:31 crc kubenswrapper[5119]: I0220 00:14:31.317534 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Feb 20 00:14:31 crc kubenswrapper[5119]: I0220 00:14:31.389980 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Feb 20 00:14:31 crc kubenswrapper[5119]: I0220 00:14:31.424508 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Feb 20 00:14:31 crc kubenswrapper[5119]: I0220 00:14:31.431915 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Feb 20 00:14:31 crc kubenswrapper[5119]: I0220 00:14:31.488423 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Feb 20 00:14:31 crc kubenswrapper[5119]: I0220 00:14:31.541428 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Feb 20 00:14:31 crc kubenswrapper[5119]: I0220 00:14:31.616727 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 20 00:14:31 crc kubenswrapper[5119]: I0220 00:14:31.674813 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Feb 20 00:14:31 crc kubenswrapper[5119]: I0220 00:14:31.687007 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Feb 20 00:14:31 crc kubenswrapper[5119]: I0220 00:14:31.721934 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Feb 20 00:14:31 crc kubenswrapper[5119]: I0220 00:14:31.729563 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Feb 20 00:14:31 crc kubenswrapper[5119]: I0220 00:14:31.735625 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Feb 20 00:14:31 crc kubenswrapper[5119]: I0220 00:14:31.755448 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Feb 20 00:14:31 crc kubenswrapper[5119]: I0220 00:14:31.776583 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:14:31 crc kubenswrapper[5119]: I0220 00:14:31.797888 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Feb 20 00:14:31 crc kubenswrapper[5119]: I0220 00:14:31.919156 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:14:31 crc kubenswrapper[5119]: I0220 00:14:31.935285 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Feb 20 00:14:31 crc kubenswrapper[5119]: I0220 00:14:31.988637 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:14:32 crc kubenswrapper[5119]: I0220 00:14:32.114451 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Feb 20 00:14:32 crc kubenswrapper[5119]: I0220 00:14:32.156798 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Feb 20 00:14:32 crc kubenswrapper[5119]: I0220 00:14:32.180256 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Feb 20 00:14:32 crc kubenswrapper[5119]: I0220 00:14:32.237580 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Feb 20 00:14:32 crc kubenswrapper[5119]: I0220 00:14:32.253979 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Feb 20 00:14:32 crc kubenswrapper[5119]: I0220 00:14:32.369265 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 20 00:14:32 crc kubenswrapper[5119]: I0220 00:14:32.391097 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Feb 20 00:14:32 crc kubenswrapper[5119]: I0220 00:14:32.410979 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Feb 20 00:14:32 crc kubenswrapper[5119]: I0220 00:14:32.428363 5119 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Feb 20 00:14:32 crc kubenswrapper[5119]: I0220 00:14:32.499665 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Feb 20 00:14:32 crc kubenswrapper[5119]: I0220 00:14:32.525803 5119 ???:1] "http: TLS handshake error from 192.168.126.11:52016: no serving certificate available for the kubelet" Feb 20 00:14:32 crc kubenswrapper[5119]: I0220 00:14:32.548699 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Feb 20 00:14:32 crc kubenswrapper[5119]: I0220 00:14:32.556664 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Feb 20 00:14:32 crc kubenswrapper[5119]: I0220 00:14:32.638522 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Feb 20 00:14:32 crc kubenswrapper[5119]: I0220 00:14:32.692290 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Feb 20 00:14:32 crc 
kubenswrapper[5119]: I0220 00:14:32.698708 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Feb 20 00:14:32 crc kubenswrapper[5119]: I0220 00:14:32.755710 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Feb 20 00:14:32 crc kubenswrapper[5119]: I0220 00:14:32.773457 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Feb 20 00:14:32 crc kubenswrapper[5119]: I0220 00:14:32.821179 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Feb 20 00:14:32 crc kubenswrapper[5119]: I0220 00:14:32.909725 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Feb 20 00:14:33 crc kubenswrapper[5119]: I0220 00:14:33.001612 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Feb 20 00:14:33 crc kubenswrapper[5119]: I0220 00:14:33.049568 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Feb 20 00:14:33 crc kubenswrapper[5119]: I0220 00:14:33.074140 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Feb 20 00:14:33 crc kubenswrapper[5119]: I0220 00:14:33.081910 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Feb 20 00:14:33 crc kubenswrapper[5119]: I0220 00:14:33.275616 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Feb 20 00:14:33 crc kubenswrapper[5119]: I0220 00:14:33.294275 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Feb 20 00:14:33 crc kubenswrapper[5119]: I0220 00:14:33.343116 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Feb 20 00:14:33 crc kubenswrapper[5119]: I0220 00:14:33.344977 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Feb 20 00:14:33 crc kubenswrapper[5119]: I0220 00:14:33.453897 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Feb 20 00:14:33 crc kubenswrapper[5119]: I0220 00:14:33.483006 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Feb 20 00:14:33 crc kubenswrapper[5119]: I0220 00:14:33.541916 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Feb 20 00:14:33 crc kubenswrapper[5119]: I0220 00:14:33.553672 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Feb 20 00:14:33 crc kubenswrapper[5119]: I0220 00:14:33.556185 5119 reflector.go:430] "Caches 
populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Feb 20 00:14:33 crc kubenswrapper[5119]: I0220 00:14:33.678308 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Feb 20 00:14:33 crc kubenswrapper[5119]: I0220 00:14:33.680203 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Feb 20 00:14:33 crc kubenswrapper[5119]: I0220 00:14:33.699162 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Feb 20 00:14:33 crc kubenswrapper[5119]: I0220 00:14:33.755812 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Feb 20 00:14:33 crc kubenswrapper[5119]: I0220 00:14:33.772837 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Feb 20 00:14:33 crc kubenswrapper[5119]: I0220 00:14:33.816690 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Feb 20 00:14:33 crc kubenswrapper[5119]: I0220 00:14:33.829221 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Feb 20 00:14:33 crc kubenswrapper[5119]: I0220 00:14:33.910058 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Feb 20 00:14:33 crc kubenswrapper[5119]: I0220 00:14:33.928094 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Feb 20 00:14:33 crc kubenswrapper[5119]: I0220 00:14:33.935146 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Feb 20 00:14:33 crc kubenswrapper[5119]: I0220 00:14:33.961458 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Feb 20 00:14:33 crc kubenswrapper[5119]: I0220 00:14:33.964651 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Feb 20 00:14:33 crc kubenswrapper[5119]: I0220 00:14:33.995583 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Feb 20 00:14:34 crc kubenswrapper[5119]: I0220 00:14:34.013402 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:14:34 crc kubenswrapper[5119]: I0220 00:14:34.020506 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Feb 20 00:14:34 crc kubenswrapper[5119]: I0220 00:14:34.247901 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Feb 20 00:14:34 crc kubenswrapper[5119]: I0220 00:14:34.273339 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Feb 20 00:14:34 crc 
kubenswrapper[5119]: I0220 00:14:34.280104 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:14:34 crc kubenswrapper[5119]: I0220 00:14:34.285817 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Feb 20 00:14:34 crc kubenswrapper[5119]: I0220 00:14:34.386222 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Feb 20 00:14:34 crc kubenswrapper[5119]: I0220 00:14:34.439872 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Feb 20 00:14:34 crc kubenswrapper[5119]: I0220 00:14:34.506024 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Feb 20 00:14:34 crc kubenswrapper[5119]: I0220 00:14:34.557305 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Feb 20 00:14:34 crc kubenswrapper[5119]: I0220 00:14:34.575118 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Feb 20 00:14:34 crc kubenswrapper[5119]: I0220 00:14:34.641097 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Feb 20 00:14:34 crc kubenswrapper[5119]: I0220 00:14:34.728294 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Feb 20 00:14:34 crc kubenswrapper[5119]: I0220 00:14:34.771898 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 20 00:14:34 crc kubenswrapper[5119]: I0220 00:14:34.964669 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Feb 20 00:14:34 crc kubenswrapper[5119]: I0220 00:14:34.996986 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Feb 20 00:14:35 crc kubenswrapper[5119]: I0220 00:14:35.006228 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Feb 20 00:14:35 crc kubenswrapper[5119]: I0220 00:14:35.126702 5119 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 20 00:14:35 crc kubenswrapper[5119]: I0220 00:14:35.126998 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://ef9281a405947f0a594e8f2fce43f59e8f830d5688d675347733d4c7cec59cdd" gracePeriod=5 Feb 20 00:14:35 crc kubenswrapper[5119]: I0220 00:14:35.309724 5119 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Feb 20 00:14:35 crc kubenswrapper[5119]: I0220 00:14:35.416460 5119 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Feb 20 00:14:35 crc kubenswrapper[5119]: I0220 00:14:35.426374 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Feb 20 00:14:35 crc kubenswrapper[5119]: I0220 00:14:35.427450 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Feb 20 00:14:35 crc kubenswrapper[5119]: I0220 00:14:35.584458 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Feb 20 00:14:35 crc kubenswrapper[5119]: I0220 00:14:35.642049 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Feb 20 00:14:35 crc kubenswrapper[5119]: I0220 00:14:35.694473 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Feb 20 00:14:35 crc kubenswrapper[5119]: I0220 00:14:35.730827 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Feb 20 00:14:35 crc kubenswrapper[5119]: I0220 00:14:35.748760 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Feb 20 00:14:35 crc kubenswrapper[5119]: I0220 00:14:35.858458 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Feb 20 00:14:35 crc kubenswrapper[5119]: I0220 00:14:35.899245 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Feb 20 00:14:35 crc kubenswrapper[5119]: I0220 00:14:35.903622 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:14:35 crc kubenswrapper[5119]: I0220 00:14:35.995853 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Feb 20 00:14:36 crc kubenswrapper[5119]: I0220 00:14:36.039974 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Feb 20 00:14:36 crc kubenswrapper[5119]: I0220 00:14:36.084366 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Feb 20 00:14:36 crc kubenswrapper[5119]: I0220 00:14:36.121027 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Feb 20 00:14:36 crc kubenswrapper[5119]: I0220 00:14:36.137584 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Feb 20 00:14:36 crc kubenswrapper[5119]: I0220 00:14:36.382039 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Feb 20 00:14:36 crc kubenswrapper[5119]: I0220 00:14:36.403097 5119 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Feb 20 00:14:36 crc kubenswrapper[5119]: I0220 00:14:36.680077 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Feb 20 00:14:36 crc kubenswrapper[5119]: I0220 00:14:36.739900 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Feb 20 00:14:36 crc kubenswrapper[5119]: I0220 00:14:36.797368 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Feb 20 00:14:36 crc kubenswrapper[5119]: I0220 00:14:36.913857 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Feb 20 00:14:37 crc kubenswrapper[5119]: I0220 00:14:37.078856 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Feb 20 00:14:37 crc kubenswrapper[5119]: I0220 00:14:37.141961 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Feb 20 00:14:37 crc kubenswrapper[5119]: I0220 00:14:37.375928 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Feb 20 00:14:37 crc kubenswrapper[5119]: I0220 00:14:37.395093 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Feb 20 00:14:37 crc kubenswrapper[5119]: I0220 00:14:37.549955 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Feb 20 00:14:37 crc kubenswrapper[5119]: I0220 00:14:37.683612 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:14:37 crc kubenswrapper[5119]: I0220 00:14:37.731759 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Feb 20 00:14:38 crc kubenswrapper[5119]: I0220 00:14:38.082522 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Feb 20 00:14:38 crc kubenswrapper[5119]: I0220 00:14:38.288952 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Feb 20 00:14:38 crc kubenswrapper[5119]: I0220 00:14:38.334300 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Feb 20 00:14:38 crc kubenswrapper[5119]: I0220 00:14:38.353825 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Feb 20 00:14:38 crc kubenswrapper[5119]: I0220 00:14:38.376628 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Feb 20 00:14:38 crc kubenswrapper[5119]: I0220 00:14:38.391222 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" 
Feb 20 00:14:38 crc kubenswrapper[5119]: I0220 00:14:38.508707 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Feb 20 00:14:38 crc kubenswrapper[5119]: I0220 00:14:38.710532 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Feb 20 00:14:38 crc kubenswrapper[5119]: I0220 00:14:38.733749 5119 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 20 00:14:38 crc kubenswrapper[5119]: I0220 00:14:38.902232 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\""
Feb 20 00:14:39 crc kubenswrapper[5119]: I0220 00:14:39.287254 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\""
Feb 20 00:14:39 crc kubenswrapper[5119]: I0220 00:14:39.761880 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Feb 20 00:14:40 crc kubenswrapper[5119]: I0220 00:14:40.423910 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\""
Feb 20 00:14:40 crc kubenswrapper[5119]: I0220 00:14:40.723614 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Feb 20 00:14:40 crc kubenswrapper[5119]: I0220 00:14:40.724188 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 20 00:14:40 crc kubenswrapper[5119]: I0220 00:14:40.727302 5119 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Feb 20 00:14:40 crc kubenswrapper[5119]: I0220 00:14:40.862919 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Feb 20 00:14:40 crc kubenswrapper[5119]: I0220 00:14:40.863017 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Feb 20 00:14:40 crc kubenswrapper[5119]: I0220 00:14:40.863141 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Feb 20 00:14:40 crc kubenswrapper[5119]: I0220 00:14:40.863188 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Feb 20 00:14:40 crc kubenswrapper[5119]: I0220 00:14:40.863281 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Feb 20 00:14:40 crc kubenswrapper[5119]: I0220 00:14:40.863313 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 20 00:14:40 crc kubenswrapper[5119]: I0220 00:14:40.863381 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 20 00:14:40 crc kubenswrapper[5119]: I0220 00:14:40.863403 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 20 00:14:40 crc kubenswrapper[5119]: I0220 00:14:40.863536 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 20 00:14:40 crc kubenswrapper[5119]: I0220 00:14:40.864246 5119 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\""
Feb 20 00:14:40 crc kubenswrapper[5119]: I0220 00:14:40.864283 5119 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\""
Feb 20 00:14:40 crc kubenswrapper[5119]: I0220 00:14:40.864308 5119 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\""
Feb 20 00:14:40 crc kubenswrapper[5119]: I0220 00:14:40.864332 5119 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 20 00:14:40 crc kubenswrapper[5119]: I0220 00:14:40.880184 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 20 00:14:40 crc kubenswrapper[5119]: I0220 00:14:40.965630 5119 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 20 00:14:41 crc kubenswrapper[5119]: I0220 00:14:41.134964 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Feb 20 00:14:41 crc kubenswrapper[5119]: I0220 00:14:41.135062 5119 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="ef9281a405947f0a594e8f2fce43f59e8f830d5688d675347733d4c7cec59cdd" exitCode=137
Feb 20 00:14:41 crc kubenswrapper[5119]: I0220 00:14:41.135176 5119 scope.go:117] "RemoveContainer" containerID="ef9281a405947f0a594e8f2fce43f59e8f830d5688d675347733d4c7cec59cdd"
Feb 20 00:14:41 crc kubenswrapper[5119]: I0220 00:14:41.135313 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 20 00:14:41 crc kubenswrapper[5119]: I0220 00:14:41.136977 5119 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Feb 20 00:14:41 crc kubenswrapper[5119]: I0220 00:14:41.164038 5119 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Feb 20 00:14:41 crc kubenswrapper[5119]: I0220 00:14:41.177672 5119 scope.go:117] "RemoveContainer" containerID="ef9281a405947f0a594e8f2fce43f59e8f830d5688d675347733d4c7cec59cdd"
Feb 20 00:14:41 crc kubenswrapper[5119]: E0220 00:14:41.178442 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef9281a405947f0a594e8f2fce43f59e8f830d5688d675347733d4c7cec59cdd\": container with ID starting with ef9281a405947f0a594e8f2fce43f59e8f830d5688d675347733d4c7cec59cdd not found: ID does not exist" containerID="ef9281a405947f0a594e8f2fce43f59e8f830d5688d675347733d4c7cec59cdd"
Feb 20 00:14:41 crc kubenswrapper[5119]: I0220 00:14:41.178573 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef9281a405947f0a594e8f2fce43f59e8f830d5688d675347733d4c7cec59cdd"} err="failed to get container status \"ef9281a405947f0a594e8f2fce43f59e8f830d5688d675347733d4c7cec59cdd\": rpc error: code = NotFound desc = could not find container \"ef9281a405947f0a594e8f2fce43f59e8f830d5688d675347733d4c7cec59cdd\": container with ID starting with ef9281a405947f0a594e8f2fce43f59e8f830d5688d675347733d4c7cec59cdd not found: ID does not exist"
Feb 20 00:14:42 crc kubenswrapper[5119]: I0220 00:14:42.161057 5119 patch_prober.go:28] interesting pod/machine-config-daemon-l7jjp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 20 00:14:42 crc kubenswrapper[5119]: I0220 00:14:42.161590 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" podUID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 20 00:14:42 crc kubenswrapper[5119]: I0220 00:14:42.161672 5119 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp"
Feb 20 00:14:42 crc kubenswrapper[5119]: I0220 00:14:42.162692 5119 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e72863f3bb34d69a32e4bf16d58f08f3318fc63f4aed8833baffafd71c833abb"} pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 20 00:14:42 crc kubenswrapper[5119]: I0220 00:14:42.162822 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" podUID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerName="machine-config-daemon" containerID="cri-o://e72863f3bb34d69a32e4bf16d58f08f3318fc63f4aed8833baffafd71c833abb" gracePeriod=600
Feb 20 00:14:42 crc kubenswrapper[5119]: I0220 00:14:42.880978 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes"
Feb 20 00:14:43 crc kubenswrapper[5119]: I0220 00:14:43.152184 5119 generic.go:358] "Generic (PLEG): container finished" podID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerID="e72863f3bb34d69a32e4bf16d58f08f3318fc63f4aed8833baffafd71c833abb" exitCode=0
Feb 20 00:14:43 crc kubenswrapper[5119]: I0220 00:14:43.152273 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" event={"ID":"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d","Type":"ContainerDied","Data":"e72863f3bb34d69a32e4bf16d58f08f3318fc63f4aed8833baffafd71c833abb"}
Feb 20 00:14:43 crc kubenswrapper[5119]: I0220 00:14:43.152315 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" event={"ID":"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d","Type":"ContainerStarted","Data":"7379ef2df3511a5b4a842ddfae4a6c59f8bb16e8775afbafe4bb7b62e106daae"}
Feb 20 00:14:44 crc kubenswrapper[5119]: I0220 00:14:44.659658 5119 ???:1] "http: TLS handshake error from 192.168.126.11:50100: no serving certificate available for the kubelet"
Feb 20 00:15:00 crc kubenswrapper[5119]: I0220 00:15:00.204114 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525775-8tnl7"]
Feb 20 00:15:00 crc kubenswrapper[5119]: I0220 00:15:00.205971 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Feb 20 00:15:00 crc kubenswrapper[5119]: I0220 00:15:00.205990 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b"
containerName="startup-monitor" Feb 20 00:15:00 crc kubenswrapper[5119]: I0220 00:15:00.206012 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="372282dd-2962-4f1e-80ec-46aae632c315" containerName="installer" Feb 20 00:15:00 crc kubenswrapper[5119]: I0220 00:15:00.206021 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="372282dd-2962-4f1e-80ec-46aae632c315" containerName="installer" Feb 20 00:15:00 crc kubenswrapper[5119]: I0220 00:15:00.206132 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="372282dd-2962-4f1e-80ec-46aae632c315" containerName="installer" Feb 20 00:15:00 crc kubenswrapper[5119]: I0220 00:15:00.206152 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Feb 20 00:15:00 crc kubenswrapper[5119]: I0220 00:15:00.210575 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-8tnl7" Feb 20 00:15:00 crc kubenswrapper[5119]: I0220 00:15:00.214820 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Feb 20 00:15:00 crc kubenswrapper[5119]: I0220 00:15:00.215138 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525775-8tnl7"] Feb 20 00:15:00 crc kubenswrapper[5119]: I0220 00:15:00.216816 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Feb 20 00:15:00 crc kubenswrapper[5119]: I0220 00:15:00.303927 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74e2d6c2-49df-4803-9762-13fc9e153586-secret-volume\") pod \"collect-profiles-29525775-8tnl7\" (UID: \"74e2d6c2-49df-4803-9762-13fc9e153586\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-8tnl7" Feb 20 00:15:00 crc kubenswrapper[5119]: I0220 00:15:00.304119 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74e2d6c2-49df-4803-9762-13fc9e153586-config-volume\") pod \"collect-profiles-29525775-8tnl7\" (UID: \"74e2d6c2-49df-4803-9762-13fc9e153586\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-8tnl7" Feb 20 00:15:00 crc kubenswrapper[5119]: I0220 00:15:00.304204 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj2cs\" (UniqueName: \"kubernetes.io/projected/74e2d6c2-49df-4803-9762-13fc9e153586-kube-api-access-qj2cs\") pod \"collect-profiles-29525775-8tnl7\" (UID: \"74e2d6c2-49df-4803-9762-13fc9e153586\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-8tnl7" Feb 20 00:15:00 crc kubenswrapper[5119]: I0220 00:15:00.406024 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74e2d6c2-49df-4803-9762-13fc9e153586-config-volume\") pod \"collect-profiles-29525775-8tnl7\" (UID: \"74e2d6c2-49df-4803-9762-13fc9e153586\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-8tnl7" Feb 20 00:15:00 crc kubenswrapper[5119]: I0220 00:15:00.406613 5119 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"kube-api-access-qj2cs\" (UniqueName: \"kubernetes.io/projected/74e2d6c2-49df-4803-9762-13fc9e153586-kube-api-access-qj2cs\") pod \"collect-profiles-29525775-8tnl7\" (UID: \"74e2d6c2-49df-4803-9762-13fc9e153586\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-8tnl7" Feb 20 00:15:00 crc kubenswrapper[5119]: I0220 00:15:00.406760 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74e2d6c2-49df-4803-9762-13fc9e153586-secret-volume\") pod \"collect-profiles-29525775-8tnl7\" (UID: \"74e2d6c2-49df-4803-9762-13fc9e153586\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-8tnl7" Feb 20 00:15:00 crc kubenswrapper[5119]: I0220 00:15:00.407467 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74e2d6c2-49df-4803-9762-13fc9e153586-config-volume\") pod \"collect-profiles-29525775-8tnl7\" (UID: \"74e2d6c2-49df-4803-9762-13fc9e153586\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-8tnl7" Feb 20 00:15:00 crc kubenswrapper[5119]: I0220 00:15:00.414614 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74e2d6c2-49df-4803-9762-13fc9e153586-secret-volume\") pod \"collect-profiles-29525775-8tnl7\" (UID: \"74e2d6c2-49df-4803-9762-13fc9e153586\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-8tnl7" Feb 20 00:15:00 crc kubenswrapper[5119]: I0220 00:15:00.425954 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj2cs\" (UniqueName: \"kubernetes.io/projected/74e2d6c2-49df-4803-9762-13fc9e153586-kube-api-access-qj2cs\") pod \"collect-profiles-29525775-8tnl7\" (UID: \"74e2d6c2-49df-4803-9762-13fc9e153586\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-8tnl7" Feb 20 00:15:00 crc kubenswrapper[5119]: I0220 00:15:00.546657 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-8tnl7" Feb 20 00:15:00 crc kubenswrapper[5119]: I0220 00:15:00.814956 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525775-8tnl7"] Feb 20 00:15:01 crc kubenswrapper[5119]: I0220 00:15:01.274327 5119 generic.go:358] "Generic (PLEG): container finished" podID="74e2d6c2-49df-4803-9762-13fc9e153586" containerID="0b33513768f87e92a39a8c6d4e272c90baa8e0569e08e043d5b7928039444ab8" exitCode=0 Feb 20 00:15:01 crc kubenswrapper[5119]: I0220 00:15:01.274456 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-8tnl7" event={"ID":"74e2d6c2-49df-4803-9762-13fc9e153586","Type":"ContainerDied","Data":"0b33513768f87e92a39a8c6d4e272c90baa8e0569e08e043d5b7928039444ab8"} Feb 20 00:15:01 crc kubenswrapper[5119]: I0220 00:15:01.274520 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-8tnl7" event={"ID":"74e2d6c2-49df-4803-9762-13fc9e153586","Type":"ContainerStarted","Data":"6c2aae75a07ad2f490785843f99edeb3e9982c04d2a2a4b6f287a02e0fa381a7"} Feb 20 00:15:02 crc kubenswrapper[5119]: I0220 00:15:02.562872 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-8tnl7" Feb 20 00:15:02 crc kubenswrapper[5119]: I0220 00:15:02.637300 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74e2d6c2-49df-4803-9762-13fc9e153586-secret-volume\") pod \"74e2d6c2-49df-4803-9762-13fc9e153586\" (UID: \"74e2d6c2-49df-4803-9762-13fc9e153586\") " Feb 20 00:15:02 crc kubenswrapper[5119]: I0220 00:15:02.637444 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj2cs\" (UniqueName: \"kubernetes.io/projected/74e2d6c2-49df-4803-9762-13fc9e153586-kube-api-access-qj2cs\") pod \"74e2d6c2-49df-4803-9762-13fc9e153586\" (UID: \"74e2d6c2-49df-4803-9762-13fc9e153586\") " Feb 20 00:15:02 crc kubenswrapper[5119]: I0220 00:15:02.637685 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74e2d6c2-49df-4803-9762-13fc9e153586-config-volume\") pod \"74e2d6c2-49df-4803-9762-13fc9e153586\" (UID: \"74e2d6c2-49df-4803-9762-13fc9e153586\") " Feb 20 00:15:02 crc kubenswrapper[5119]: I0220 00:15:02.639687 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74e2d6c2-49df-4803-9762-13fc9e153586-config-volume" (OuterVolumeSpecName: "config-volume") pod "74e2d6c2-49df-4803-9762-13fc9e153586" (UID: "74e2d6c2-49df-4803-9762-13fc9e153586"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:15:02 crc kubenswrapper[5119]: I0220 00:15:02.644811 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74e2d6c2-49df-4803-9762-13fc9e153586-kube-api-access-qj2cs" (OuterVolumeSpecName: "kube-api-access-qj2cs") pod "74e2d6c2-49df-4803-9762-13fc9e153586" (UID: "74e2d6c2-49df-4803-9762-13fc9e153586"). InnerVolumeSpecName "kube-api-access-qj2cs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:15:02 crc kubenswrapper[5119]: I0220 00:15:02.645228 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74e2d6c2-49df-4803-9762-13fc9e153586-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "74e2d6c2-49df-4803-9762-13fc9e153586" (UID: "74e2d6c2-49df-4803-9762-13fc9e153586"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:15:02 crc kubenswrapper[5119]: I0220 00:15:02.739497 5119 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74e2d6c2-49df-4803-9762-13fc9e153586-config-volume\") on node \"crc\" DevicePath \"\"" Feb 20 00:15:02 crc kubenswrapper[5119]: I0220 00:15:02.739579 5119 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74e2d6c2-49df-4803-9762-13fc9e153586-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 20 00:15:02 crc kubenswrapper[5119]: I0220 00:15:02.739597 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qj2cs\" (UniqueName: \"kubernetes.io/projected/74e2d6c2-49df-4803-9762-13fc9e153586-kube-api-access-qj2cs\") on node \"crc\" DevicePath \"\"" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.289871 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-8tnl7" event={"ID":"74e2d6c2-49df-4803-9762-13fc9e153586","Type":"ContainerDied","Data":"6c2aae75a07ad2f490785843f99edeb3e9982c04d2a2a4b6f287a02e0fa381a7"} Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.290356 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c2aae75a07ad2f490785843f99edeb3e9982c04d2a2a4b6f287a02e0fa381a7" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.290458 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-8tnl7" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.375679 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-45lmz"] Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.376177 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" podUID="545f05c6-70cf-4ce3-ad50-ad4e264680ca" containerName="controller-manager" containerID="cri-o://bf1b914b35b06971c8a11b79f34938b93153d1eea09186d5b23d72ef8e6c9601" gracePeriod=30 Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.382421 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr"] Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.382806 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr" podUID="fef1a991-20ef-447f-94d6-c18c5f875ae8" containerName="route-controller-manager" containerID="cri-o://f221132b191382679667bf9834efdc2cdab354168c5d0c842e532e0fabc326a5" gracePeriod=30 Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.755344 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.792928 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68fcfddc44-9tw6h"] Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.793977 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="74e2d6c2-49df-4803-9762-13fc9e153586" containerName="collect-profiles" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.794011 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="74e2d6c2-49df-4803-9762-13fc9e153586" containerName="collect-profiles" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.794037 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fef1a991-20ef-447f-94d6-c18c5f875ae8" containerName="route-controller-manager" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.794047 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="fef1a991-20ef-447f-94d6-c18c5f875ae8" containerName="route-controller-manager" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.794199 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="fef1a991-20ef-447f-94d6-c18c5f875ae8" containerName="route-controller-manager" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.794228 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="74e2d6c2-49df-4803-9762-13fc9e153586" containerName="collect-profiles" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.798876 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68fcfddc44-9tw6h" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.801617 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68fcfddc44-9tw6h"] Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.823367 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.851810 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-748bd4bc57-wswgt"] Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.853020 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fef1a991-20ef-447f-94d6-c18c5f875ae8-client-ca\") pod \"fef1a991-20ef-447f-94d6-c18c5f875ae8\" (UID: \"fef1a991-20ef-447f-94d6-c18c5f875ae8\") " Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.853101 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fef1a991-20ef-447f-94d6-c18c5f875ae8-serving-cert\") pod \"fef1a991-20ef-447f-94d6-c18c5f875ae8\" (UID: \"fef1a991-20ef-447f-94d6-c18c5f875ae8\") " Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.853037 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="545f05c6-70cf-4ce3-ad50-ad4e264680ca" containerName="controller-manager" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.853183 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="545f05c6-70cf-4ce3-ad50-ad4e264680ca" containerName="controller-manager" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.853339 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="545f05c6-70cf-4ce3-ad50-ad4e264680ca" containerName="controller-manager" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.854296 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fef1a991-20ef-447f-94d6-c18c5f875ae8-tmp\") pod \"fef1a991-20ef-447f-94d6-c18c5f875ae8\" (UID: \"fef1a991-20ef-447f-94d6-c18c5f875ae8\") " Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.854308 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fef1a991-20ef-447f-94d6-c18c5f875ae8-client-ca" (OuterVolumeSpecName: "client-ca") pod "fef1a991-20ef-447f-94d6-c18c5f875ae8" (UID: "fef1a991-20ef-447f-94d6-c18c5f875ae8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.854464 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fef1a991-20ef-447f-94d6-c18c5f875ae8-config\") pod \"fef1a991-20ef-447f-94d6-c18c5f875ae8\" (UID: \"fef1a991-20ef-447f-94d6-c18c5f875ae8\") " Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.854649 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wknch\" (UniqueName: \"kubernetes.io/projected/fef1a991-20ef-447f-94d6-c18c5f875ae8-kube-api-access-wknch\") pod \"fef1a991-20ef-447f-94d6-c18c5f875ae8\" (UID: \"fef1a991-20ef-447f-94d6-c18c5f875ae8\") " Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.854859 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fef1a991-20ef-447f-94d6-c18c5f875ae8-tmp" (OuterVolumeSpecName: "tmp") pod "fef1a991-20ef-447f-94d6-c18c5f875ae8" (UID: "fef1a991-20ef-447f-94d6-c18c5f875ae8"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.855311 5119 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fef1a991-20ef-447f-94d6-c18c5f875ae8-client-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.855339 5119 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fef1a991-20ef-447f-94d6-c18c5f875ae8-tmp\") on node \"crc\" DevicePath \"\"" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.855289 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fef1a991-20ef-447f-94d6-c18c5f875ae8-config" (OuterVolumeSpecName: "config") pod "fef1a991-20ef-447f-94d6-c18c5f875ae8" (UID: "fef1a991-20ef-447f-94d6-c18c5f875ae8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.856679 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-748bd4bc57-wswgt" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.861061 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fef1a991-20ef-447f-94d6-c18c5f875ae8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fef1a991-20ef-447f-94d6-c18c5f875ae8" (UID: "fef1a991-20ef-447f-94d6-c18c5f875ae8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.865482 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-748bd4bc57-wswgt"] Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.866793 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fef1a991-20ef-447f-94d6-c18c5f875ae8-kube-api-access-wknch" (OuterVolumeSpecName: "kube-api-access-wknch") pod "fef1a991-20ef-447f-94d6-c18c5f875ae8" (UID: "fef1a991-20ef-447f-94d6-c18c5f875ae8"). InnerVolumeSpecName "kube-api-access-wknch". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.955878 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/545f05c6-70cf-4ce3-ad50-ad4e264680ca-config\") pod \"545f05c6-70cf-4ce3-ad50-ad4e264680ca\" (UID: \"545f05c6-70cf-4ce3-ad50-ad4e264680ca\") " Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.955944 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/545f05c6-70cf-4ce3-ad50-ad4e264680ca-proxy-ca-bundles\") pod \"545f05c6-70cf-4ce3-ad50-ad4e264680ca\" (UID: \"545f05c6-70cf-4ce3-ad50-ad4e264680ca\") " Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.956012 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/545f05c6-70cf-4ce3-ad50-ad4e264680ca-serving-cert\") pod \"545f05c6-70cf-4ce3-ad50-ad4e264680ca\" (UID: \"545f05c6-70cf-4ce3-ad50-ad4e264680ca\") " Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.956049 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9zhx\" (UniqueName: \"kubernetes.io/projected/545f05c6-70cf-4ce3-ad50-ad4e264680ca-kube-api-access-f9zhx\") pod \"545f05c6-70cf-4ce3-ad50-ad4e264680ca\" (UID: \"545f05c6-70cf-4ce3-ad50-ad4e264680ca\") " Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.956155 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/545f05c6-70cf-4ce3-ad50-ad4e264680ca-tmp\") pod \"545f05c6-70cf-4ce3-ad50-ad4e264680ca\" (UID: \"545f05c6-70cf-4ce3-ad50-ad4e264680ca\") " Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.956173 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/545f05c6-70cf-4ce3-ad50-ad4e264680ca-client-ca\") pod \"545f05c6-70cf-4ce3-ad50-ad4e264680ca\" (UID: \"545f05c6-70cf-4ce3-ad50-ad4e264680ca\") " Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.956280 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j222\" (UniqueName: \"kubernetes.io/projected/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-kube-api-access-8j222\") pod \"controller-manager-748bd4bc57-wswgt\" (UID: \"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e\") " pod="openshift-controller-manager/controller-manager-748bd4bc57-wswgt" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.956323 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhfb5\" (UniqueName: \"kubernetes.io/projected/643a2491-9a73-4bbc-b034-cbad833ed05c-kube-api-access-hhfb5\") pod \"route-controller-manager-68fcfddc44-9tw6h\" (UID: \"643a2491-9a73-4bbc-b034-cbad833ed05c\") " pod="openshift-route-controller-manager/route-controller-manager-68fcfddc44-9tw6h" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.956344 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/643a2491-9a73-4bbc-b034-cbad833ed05c-config\") pod \"route-controller-manager-68fcfddc44-9tw6h\" (UID: \"643a2491-9a73-4bbc-b034-cbad833ed05c\") " pod="openshift-route-controller-manager/route-controller-manager-68fcfddc44-9tw6h" Feb 20 00:15:03 crc kubenswrapper[5119]: 
I0220 00:15:03.956365 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/643a2491-9a73-4bbc-b034-cbad833ed05c-serving-cert\") pod \"route-controller-manager-68fcfddc44-9tw6h\" (UID: \"643a2491-9a73-4bbc-b034-cbad833ed05c\") " pod="openshift-route-controller-manager/route-controller-manager-68fcfddc44-9tw6h" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.956390 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-client-ca\") pod \"controller-manager-748bd4bc57-wswgt\" (UID: \"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e\") " pod="openshift-controller-manager/controller-manager-748bd4bc57-wswgt" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.956417 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-serving-cert\") pod \"controller-manager-748bd4bc57-wswgt\" (UID: \"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e\") " pod="openshift-controller-manager/controller-manager-748bd4bc57-wswgt" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.956433 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/643a2491-9a73-4bbc-b034-cbad833ed05c-client-ca\") pod \"route-controller-manager-68fcfddc44-9tw6h\" (UID: \"643a2491-9a73-4bbc-b034-cbad833ed05c\") " pod="openshift-route-controller-manager/route-controller-manager-68fcfddc44-9tw6h" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.956463 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/643a2491-9a73-4bbc-b034-cbad833ed05c-tmp\") pod \"route-controller-manager-68fcfddc44-9tw6h\" (UID: \"643a2491-9a73-4bbc-b034-cbad833ed05c\") " pod="openshift-route-controller-manager/route-controller-manager-68fcfddc44-9tw6h" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.956500 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-proxy-ca-bundles\") pod \"controller-manager-748bd4bc57-wswgt\" (UID: \"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e\") " pod="openshift-controller-manager/controller-manager-748bd4bc57-wswgt" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.956521 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-tmp\") pod \"controller-manager-748bd4bc57-wswgt\" (UID: \"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e\") " pod="openshift-controller-manager/controller-manager-748bd4bc57-wswgt" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.956558 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-config\") pod \"controller-manager-748bd4bc57-wswgt\" (UID: \"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e\") " pod="openshift-controller-manager/controller-manager-748bd4bc57-wswgt" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.956590 5119 reconciler_common.go:299] "Volume 
detached for volume \"kube-api-access-wknch\" (UniqueName: \"kubernetes.io/projected/fef1a991-20ef-447f-94d6-c18c5f875ae8-kube-api-access-wknch\") on node \"crc\" DevicePath \"\"" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.956602 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fef1a991-20ef-447f-94d6-c18c5f875ae8-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.956613 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fef1a991-20ef-447f-94d6-c18c5f875ae8-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.957260 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/545f05c6-70cf-4ce3-ad50-ad4e264680ca-tmp" (OuterVolumeSpecName: "tmp") pod "545f05c6-70cf-4ce3-ad50-ad4e264680ca" (UID: "545f05c6-70cf-4ce3-ad50-ad4e264680ca"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.957755 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/545f05c6-70cf-4ce3-ad50-ad4e264680ca-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "545f05c6-70cf-4ce3-ad50-ad4e264680ca" (UID: "545f05c6-70cf-4ce3-ad50-ad4e264680ca"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.957856 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/545f05c6-70cf-4ce3-ad50-ad4e264680ca-config" (OuterVolumeSpecName: "config") pod "545f05c6-70cf-4ce3-ad50-ad4e264680ca" (UID: "545f05c6-70cf-4ce3-ad50-ad4e264680ca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.957942 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/545f05c6-70cf-4ce3-ad50-ad4e264680ca-client-ca" (OuterVolumeSpecName: "client-ca") pod "545f05c6-70cf-4ce3-ad50-ad4e264680ca" (UID: "545f05c6-70cf-4ce3-ad50-ad4e264680ca"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.961175 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/545f05c6-70cf-4ce3-ad50-ad4e264680ca-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "545f05c6-70cf-4ce3-ad50-ad4e264680ca" (UID: "545f05c6-70cf-4ce3-ad50-ad4e264680ca"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:15:03 crc kubenswrapper[5119]: I0220 00:15:03.961480 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/545f05c6-70cf-4ce3-ad50-ad4e264680ca-kube-api-access-f9zhx" (OuterVolumeSpecName: "kube-api-access-f9zhx") pod "545f05c6-70cf-4ce3-ad50-ad4e264680ca" (UID: "545f05c6-70cf-4ce3-ad50-ad4e264680ca"). InnerVolumeSpecName "kube-api-access-f9zhx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.058263 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8j222\" (UniqueName: \"kubernetes.io/projected/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-kube-api-access-8j222\") pod \"controller-manager-748bd4bc57-wswgt\" (UID: \"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e\") " pod="openshift-controller-manager/controller-manager-748bd4bc57-wswgt" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.058742 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hhfb5\" (UniqueName: \"kubernetes.io/projected/643a2491-9a73-4bbc-b034-cbad833ed05c-kube-api-access-hhfb5\") pod \"route-controller-manager-68fcfddc44-9tw6h\" (UID: \"643a2491-9a73-4bbc-b034-cbad833ed05c\") " pod="openshift-route-controller-manager/route-controller-manager-68fcfddc44-9tw6h" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.058787 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/643a2491-9a73-4bbc-b034-cbad833ed05c-config\") pod \"route-controller-manager-68fcfddc44-9tw6h\" (UID: \"643a2491-9a73-4bbc-b034-cbad833ed05c\") " pod="openshift-route-controller-manager/route-controller-manager-68fcfddc44-9tw6h" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.058830 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/643a2491-9a73-4bbc-b034-cbad833ed05c-serving-cert\") pod \"route-controller-manager-68fcfddc44-9tw6h\" (UID: \"643a2491-9a73-4bbc-b034-cbad833ed05c\") " pod="openshift-route-controller-manager/route-controller-manager-68fcfddc44-9tw6h" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.058920 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-client-ca\") pod \"controller-manager-748bd4bc57-wswgt\" (UID: \"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e\") " pod="openshift-controller-manager/controller-manager-748bd4bc57-wswgt" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.059987 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-serving-cert\") pod \"controller-manager-748bd4bc57-wswgt\" (UID: \"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e\") " pod="openshift-controller-manager/controller-manager-748bd4bc57-wswgt" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.060032 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/643a2491-9a73-4bbc-b034-cbad833ed05c-client-ca\") pod \"route-controller-manager-68fcfddc44-9tw6h\" (UID: \"643a2491-9a73-4bbc-b034-cbad833ed05c\") " pod="openshift-route-controller-manager/route-controller-manager-68fcfddc44-9tw6h" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.060088 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/643a2491-9a73-4bbc-b034-cbad833ed05c-tmp\") pod \"route-controller-manager-68fcfddc44-9tw6h\" (UID: \"643a2491-9a73-4bbc-b034-cbad833ed05c\") " pod="openshift-route-controller-manager/route-controller-manager-68fcfddc44-9tw6h" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.060158 5119 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-proxy-ca-bundles\") pod \"controller-manager-748bd4bc57-wswgt\" (UID: \"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e\") " pod="openshift-controller-manager/controller-manager-748bd4bc57-wswgt" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.060419 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-client-ca\") pod \"controller-manager-748bd4bc57-wswgt\" (UID: \"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e\") " pod="openshift-controller-manager/controller-manager-748bd4bc57-wswgt" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.060495 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-tmp\") pod \"controller-manager-748bd4bc57-wswgt\" (UID: \"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e\") " pod="openshift-controller-manager/controller-manager-748bd4bc57-wswgt" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.060565 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-config\") pod \"controller-manager-748bd4bc57-wswgt\" (UID: \"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e\") " pod="openshift-controller-manager/controller-manager-748bd4bc57-wswgt" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.060714 5119 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/545f05c6-70cf-4ce3-ad50-ad4e264680ca-tmp\") on node \"crc\" DevicePath \"\"" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.060731 5119 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/545f05c6-70cf-4ce3-ad50-ad4e264680ca-client-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.060744 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/545f05c6-70cf-4ce3-ad50-ad4e264680ca-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.060755 5119 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/545f05c6-70cf-4ce3-ad50-ad4e264680ca-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.060771 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/545f05c6-70cf-4ce3-ad50-ad4e264680ca-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.060784 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f9zhx\" (UniqueName: \"kubernetes.io/projected/545f05c6-70cf-4ce3-ad50-ad4e264680ca-kube-api-access-f9zhx\") on node \"crc\" DevicePath \"\"" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.060885 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/643a2491-9a73-4bbc-b034-cbad833ed05c-tmp\") pod \"route-controller-manager-68fcfddc44-9tw6h\" (UID: \"643a2491-9a73-4bbc-b034-cbad833ed05c\") " pod="openshift-route-controller-manager/route-controller-manager-68fcfddc44-9tw6h" Feb 
20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.061027 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/643a2491-9a73-4bbc-b034-cbad833ed05c-client-ca\") pod \"route-controller-manager-68fcfddc44-9tw6h\" (UID: \"643a2491-9a73-4bbc-b034-cbad833ed05c\") " pod="openshift-route-controller-manager/route-controller-manager-68fcfddc44-9tw6h" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.061028 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-tmp\") pod \"controller-manager-748bd4bc57-wswgt\" (UID: \"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e\") " pod="openshift-controller-manager/controller-manager-748bd4bc57-wswgt" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.062901 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-proxy-ca-bundles\") pod \"controller-manager-748bd4bc57-wswgt\" (UID: \"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e\") " pod="openshift-controller-manager/controller-manager-748bd4bc57-wswgt" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.063004 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/643a2491-9a73-4bbc-b034-cbad833ed05c-config\") pod \"route-controller-manager-68fcfddc44-9tw6h\" (UID: \"643a2491-9a73-4bbc-b034-cbad833ed05c\") " pod="openshift-route-controller-manager/route-controller-manager-68fcfddc44-9tw6h" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.064828 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-serving-cert\") pod \"controller-manager-748bd4bc57-wswgt\" (UID: \"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e\") " pod="openshift-controller-manager/controller-manager-748bd4bc57-wswgt" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.065030 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-config\") pod \"controller-manager-748bd4bc57-wswgt\" (UID: \"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e\") " pod="openshift-controller-manager/controller-manager-748bd4bc57-wswgt" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.065992 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/643a2491-9a73-4bbc-b034-cbad833ed05c-serving-cert\") pod \"route-controller-manager-68fcfddc44-9tw6h\" (UID: \"643a2491-9a73-4bbc-b034-cbad833ed05c\") " pod="openshift-route-controller-manager/route-controller-manager-68fcfddc44-9tw6h" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.076835 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j222\" (UniqueName: \"kubernetes.io/projected/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-kube-api-access-8j222\") pod \"controller-manager-748bd4bc57-wswgt\" (UID: \"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e\") " pod="openshift-controller-manager/controller-manager-748bd4bc57-wswgt" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.077432 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhfb5\" (UniqueName: 
\"kubernetes.io/projected/643a2491-9a73-4bbc-b034-cbad833ed05c-kube-api-access-hhfb5\") pod \"route-controller-manager-68fcfddc44-9tw6h\" (UID: \"643a2491-9a73-4bbc-b034-cbad833ed05c\") " pod="openshift-route-controller-manager/route-controller-manager-68fcfddc44-9tw6h" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.138832 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68fcfddc44-9tw6h" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.174005 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-748bd4bc57-wswgt" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.302424 5119 generic.go:358] "Generic (PLEG): container finished" podID="545f05c6-70cf-4ce3-ad50-ad4e264680ca" containerID="bf1b914b35b06971c8a11b79f34938b93153d1eea09186d5b23d72ef8e6c9601" exitCode=0 Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.302493 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" event={"ID":"545f05c6-70cf-4ce3-ad50-ad4e264680ca","Type":"ContainerDied","Data":"bf1b914b35b06971c8a11b79f34938b93153d1eea09186d5b23d72ef8e6c9601"} Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.302595 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.302631 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-45lmz" event={"ID":"545f05c6-70cf-4ce3-ad50-ad4e264680ca","Type":"ContainerDied","Data":"a7b7bd473cb193bca80a191efe0e16c9b59b84b74c6172fc91b308af392cfdd0"} Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.302660 5119 scope.go:117] "RemoveContainer" containerID="bf1b914b35b06971c8a11b79f34938b93153d1eea09186d5b23d72ef8e6c9601" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.311822 5119 generic.go:358] "Generic (PLEG): container finished" podID="fef1a991-20ef-447f-94d6-c18c5f875ae8" containerID="f221132b191382679667bf9834efdc2cdab354168c5d0c842e532e0fabc326a5" exitCode=0 Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.312072 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr" event={"ID":"fef1a991-20ef-447f-94d6-c18c5f875ae8","Type":"ContainerDied","Data":"f221132b191382679667bf9834efdc2cdab354168c5d0c842e532e0fabc326a5"} Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.312146 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr" event={"ID":"fef1a991-20ef-447f-94d6-c18c5f875ae8","Type":"ContainerDied","Data":"b7009c12a37cb663835cdbea3852e95f5cad150d82d8fe84bd1b86ff3dcedea3"} Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.312299 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.320435 5119 scope.go:117] "RemoveContainer" containerID="bf1b914b35b06971c8a11b79f34938b93153d1eea09186d5b23d72ef8e6c9601" Feb 20 00:15:04 crc kubenswrapper[5119]: E0220 00:15:04.320911 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf1b914b35b06971c8a11b79f34938b93153d1eea09186d5b23d72ef8e6c9601\": container with ID starting with bf1b914b35b06971c8a11b79f34938b93153d1eea09186d5b23d72ef8e6c9601 not found: ID does not exist" containerID="bf1b914b35b06971c8a11b79f34938b93153d1eea09186d5b23d72ef8e6c9601" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.320963 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf1b914b35b06971c8a11b79f34938b93153d1eea09186d5b23d72ef8e6c9601"} err="failed to get container status \"bf1b914b35b06971c8a11b79f34938b93153d1eea09186d5b23d72ef8e6c9601\": rpc error: code = NotFound desc = could not find container \"bf1b914b35b06971c8a11b79f34938b93153d1eea09186d5b23d72ef8e6c9601\": container with ID starting with bf1b914b35b06971c8a11b79f34938b93153d1eea09186d5b23d72ef8e6c9601 not found: ID does not exist" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.320993 5119 scope.go:117] "RemoveContainer" containerID="f221132b191382679667bf9834efdc2cdab354168c5d0c842e532e0fabc326a5" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.342474 5119 scope.go:117] "RemoveContainer" containerID="f221132b191382679667bf9834efdc2cdab354168c5d0c842e532e0fabc326a5" Feb 20 00:15:04 crc kubenswrapper[5119]: E0220 00:15:04.344021 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f221132b191382679667bf9834efdc2cdab354168c5d0c842e532e0fabc326a5\": container with ID starting with f221132b191382679667bf9834efdc2cdab354168c5d0c842e532e0fabc326a5 not found: ID does not exist" containerID="f221132b191382679667bf9834efdc2cdab354168c5d0c842e532e0fabc326a5" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.344071 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f221132b191382679667bf9834efdc2cdab354168c5d0c842e532e0fabc326a5"} err="failed to get container status \"f221132b191382679667bf9834efdc2cdab354168c5d0c842e532e0fabc326a5\": rpc error: code = NotFound desc = could not find container \"f221132b191382679667bf9834efdc2cdab354168c5d0c842e532e0fabc326a5\": container with ID starting with f221132b191382679667bf9834efdc2cdab354168c5d0c842e532e0fabc326a5 not found: ID does not exist" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.350721 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-45lmz"] Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.360736 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-45lmz"] Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.365802 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr"] Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.370652 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-fmgjr"] Feb 20 
00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.402622 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68fcfddc44-9tw6h"] Feb 20 00:15:04 crc kubenswrapper[5119]: W0220 00:15:04.406900 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod643a2491_9a73_4bbc_b034_cbad833ed05c.slice/crio-86cb8146a5c38ee145e0df663bec7c9ecffbe74eac260a493726ae33595f08b3 WatchSource:0}: Error finding container 86cb8146a5c38ee145e0df663bec7c9ecffbe74eac260a493726ae33595f08b3: Status 404 returned error can't find the container with id 86cb8146a5c38ee145e0df663bec7c9ecffbe74eac260a493726ae33595f08b3 Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.457207 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-748bd4bc57-wswgt"] Feb 20 00:15:04 crc kubenswrapper[5119]: W0220 00:15:04.464498 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6999040a_bf06_4fe7_8326_ee7f5ebb6a0e.slice/crio-5b20bd07d5e0908f714e6c76a02428f60b21f97ee3ee716b41376125ca91dd20 WatchSource:0}: Error finding container 5b20bd07d5e0908f714e6c76a02428f60b21f97ee3ee716b41376125ca91dd20: Status 404 returned error can't find the container with id 5b20bd07d5e0908f714e6c76a02428f60b21f97ee3ee716b41376125ca91dd20 Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.874737 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="545f05c6-70cf-4ce3-ad50-ad4e264680ca" path="/var/lib/kubelet/pods/545f05c6-70cf-4ce3-ad50-ad4e264680ca/volumes" Feb 20 00:15:04 crc kubenswrapper[5119]: I0220 00:15:04.877039 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fef1a991-20ef-447f-94d6-c18c5f875ae8" path="/var/lib/kubelet/pods/fef1a991-20ef-447f-94d6-c18c5f875ae8/volumes" Feb 20 00:15:05 crc kubenswrapper[5119]: I0220 00:15:05.322261 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68fcfddc44-9tw6h" event={"ID":"643a2491-9a73-4bbc-b034-cbad833ed05c","Type":"ContainerStarted","Data":"d75f1117b668a5f1a498e0b328c34cdcd40eab3133b9f9ad05802cbe3538b5a4"} Feb 20 00:15:05 crc kubenswrapper[5119]: I0220 00:15:05.322752 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68fcfddc44-9tw6h" event={"ID":"643a2491-9a73-4bbc-b034-cbad833ed05c","Type":"ContainerStarted","Data":"86cb8146a5c38ee145e0df663bec7c9ecffbe74eac260a493726ae33595f08b3"} Feb 20 00:15:05 crc kubenswrapper[5119]: I0220 00:15:05.322783 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-68fcfddc44-9tw6h" Feb 20 00:15:05 crc kubenswrapper[5119]: I0220 00:15:05.325082 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-748bd4bc57-wswgt" event={"ID":"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e","Type":"ContainerStarted","Data":"9c8ddd5748e56f2fe082e002c9c19876def9dbb526b96929aee494080663f475"} Feb 20 00:15:05 crc kubenswrapper[5119]: I0220 00:15:05.325119 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-748bd4bc57-wswgt" 
event={"ID":"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e","Type":"ContainerStarted","Data":"5b20bd07d5e0908f714e6c76a02428f60b21f97ee3ee716b41376125ca91dd20"} Feb 20 00:15:05 crc kubenswrapper[5119]: I0220 00:15:05.325577 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-748bd4bc57-wswgt" Feb 20 00:15:05 crc kubenswrapper[5119]: I0220 00:15:05.331106 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-748bd4bc57-wswgt" Feb 20 00:15:05 crc kubenswrapper[5119]: I0220 00:15:05.336042 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-68fcfddc44-9tw6h" Feb 20 00:15:05 crc kubenswrapper[5119]: I0220 00:15:05.346823 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-68fcfddc44-9tw6h" podStartSLOduration=2.3467991489999998 podStartE2EDuration="2.346799149s" podCreationTimestamp="2026-02-20 00:15:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:15:05.342762207 +0000 UTC m=+287.321726549" watchObservedRunningTime="2026-02-20 00:15:05.346799149 +0000 UTC m=+287.325763441" Feb 20 00:15:13 crc kubenswrapper[5119]: I0220 00:15:13.932104 5119 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 20 00:15:19 crc kubenswrapper[5119]: I0220 00:15:19.048641 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 20 00:15:19 crc kubenswrapper[5119]: I0220 00:15:19.049384 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 20 00:15:23 crc kubenswrapper[5119]: I0220 00:15:23.399699 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-748bd4bc57-wswgt" podStartSLOduration=20.399661109 podStartE2EDuration="20.399661109s" podCreationTimestamp="2026-02-20 00:15:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:15:05.389135495 +0000 UTC m=+287.368099807" watchObservedRunningTime="2026-02-20 00:15:23.399661109 +0000 UTC m=+305.378625491" Feb 20 00:15:23 crc kubenswrapper[5119]: I0220 00:15:23.408231 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-748bd4bc57-wswgt"] Feb 20 00:15:23 crc kubenswrapper[5119]: I0220 00:15:23.408807 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-748bd4bc57-wswgt" podUID="6999040a-bf06-4fe7-8326-ee7f5ebb6a0e" containerName="controller-manager" containerID="cri-o://9c8ddd5748e56f2fe082e002c9c19876def9dbb526b96929aee494080663f475" gracePeriod=30 Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.164597 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-748bd4bc57-wswgt" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.198674 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-966f7cb9-6rxkt"] Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.199772 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6999040a-bf06-4fe7-8326-ee7f5ebb6a0e" containerName="controller-manager" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.199807 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="6999040a-bf06-4fe7-8326-ee7f5ebb6a0e" containerName="controller-manager" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.200014 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="6999040a-bf06-4fe7-8326-ee7f5ebb6a0e" containerName="controller-manager" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.210445 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-966f7cb9-6rxkt" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.217222 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-966f7cb9-6rxkt"] Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.253930 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-proxy-ca-bundles\") pod \"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e\" (UID: \"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e\") " Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.254156 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-client-ca\") pod \"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e\" (UID: \"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e\") " Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.254197 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-config\") pod \"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e\" (UID: \"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e\") " Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.254247 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j222\" (UniqueName: \"kubernetes.io/projected/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-kube-api-access-8j222\") pod \"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e\" (UID: \"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e\") " Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.254305 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-tmp\") pod \"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e\" (UID: \"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e\") " Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.254359 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-serving-cert\") pod \"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e\" (UID: \"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e\") " Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.254501 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fdb4a8b8-df39-4411-8a72-8c721b4e5892-proxy-ca-bundles\") pod \"controller-manager-966f7cb9-6rxkt\" (UID: \"fdb4a8b8-df39-4411-8a72-8c721b4e5892\") " pod="openshift-controller-manager/controller-manager-966f7cb9-6rxkt" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.254591 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdb4a8b8-df39-4411-8a72-8c721b4e5892-config\") pod \"controller-manager-966f7cb9-6rxkt\" (UID: \"fdb4a8b8-df39-4411-8a72-8c721b4e5892\") " pod="openshift-controller-manager/controller-manager-966f7cb9-6rxkt" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.254629 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdb4a8b8-df39-4411-8a72-8c721b4e5892-serving-cert\") pod \"controller-manager-966f7cb9-6rxkt\" (UID: \"fdb4a8b8-df39-4411-8a72-8c721b4e5892\") " pod="openshift-controller-manager/controller-manager-966f7cb9-6rxkt" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.254679 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fdb4a8b8-df39-4411-8a72-8c721b4e5892-tmp\") pod \"controller-manager-966f7cb9-6rxkt\" (UID: \"fdb4a8b8-df39-4411-8a72-8c721b4e5892\") " pod="openshift-controller-manager/controller-manager-966f7cb9-6rxkt" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.254719 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntghv\" (UniqueName: \"kubernetes.io/projected/fdb4a8b8-df39-4411-8a72-8c721b4e5892-kube-api-access-ntghv\") pod \"controller-manager-966f7cb9-6rxkt\" (UID: \"fdb4a8b8-df39-4411-8a72-8c721b4e5892\") " pod="openshift-controller-manager/controller-manager-966f7cb9-6rxkt" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.254758 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fdb4a8b8-df39-4411-8a72-8c721b4e5892-client-ca\") pod \"controller-manager-966f7cb9-6rxkt\" (UID: \"fdb4a8b8-df39-4411-8a72-8c721b4e5892\") " pod="openshift-controller-manager/controller-manager-966f7cb9-6rxkt" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.255021 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6999040a-bf06-4fe7-8326-ee7f5ebb6a0e" (UID: "6999040a-bf06-4fe7-8326-ee7f5ebb6a0e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.255452 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-client-ca" (OuterVolumeSpecName: "client-ca") pod "6999040a-bf06-4fe7-8326-ee7f5ebb6a0e" (UID: "6999040a-bf06-4fe7-8326-ee7f5ebb6a0e"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.255665 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-tmp" (OuterVolumeSpecName: "tmp") pod "6999040a-bf06-4fe7-8326-ee7f5ebb6a0e" (UID: "6999040a-bf06-4fe7-8326-ee7f5ebb6a0e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.255673 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-config" (OuterVolumeSpecName: "config") pod "6999040a-bf06-4fe7-8326-ee7f5ebb6a0e" (UID: "6999040a-bf06-4fe7-8326-ee7f5ebb6a0e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.261481 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-kube-api-access-8j222" (OuterVolumeSpecName: "kube-api-access-8j222") pod "6999040a-bf06-4fe7-8326-ee7f5ebb6a0e" (UID: "6999040a-bf06-4fe7-8326-ee7f5ebb6a0e"). InnerVolumeSpecName "kube-api-access-8j222". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.261667 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6999040a-bf06-4fe7-8326-ee7f5ebb6a0e" (UID: "6999040a-bf06-4fe7-8326-ee7f5ebb6a0e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.356257 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fdb4a8b8-df39-4411-8a72-8c721b4e5892-proxy-ca-bundles\") pod \"controller-manager-966f7cb9-6rxkt\" (UID: \"fdb4a8b8-df39-4411-8a72-8c721b4e5892\") " pod="openshift-controller-manager/controller-manager-966f7cb9-6rxkt" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.356318 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdb4a8b8-df39-4411-8a72-8c721b4e5892-config\") pod \"controller-manager-966f7cb9-6rxkt\" (UID: \"fdb4a8b8-df39-4411-8a72-8c721b4e5892\") " pod="openshift-controller-manager/controller-manager-966f7cb9-6rxkt" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.356336 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdb4a8b8-df39-4411-8a72-8c721b4e5892-serving-cert\") pod \"controller-manager-966f7cb9-6rxkt\" (UID: \"fdb4a8b8-df39-4411-8a72-8c721b4e5892\") " pod="openshift-controller-manager/controller-manager-966f7cb9-6rxkt" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.356354 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fdb4a8b8-df39-4411-8a72-8c721b4e5892-tmp\") pod \"controller-manager-966f7cb9-6rxkt\" (UID: \"fdb4a8b8-df39-4411-8a72-8c721b4e5892\") " pod="openshift-controller-manager/controller-manager-966f7cb9-6rxkt" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.356372 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ntghv\" (UniqueName: \"kubernetes.io/projected/fdb4a8b8-df39-4411-8a72-8c721b4e5892-kube-api-access-ntghv\") pod \"controller-manager-966f7cb9-6rxkt\" (UID: \"fdb4a8b8-df39-4411-8a72-8c721b4e5892\") " pod="openshift-controller-manager/controller-manager-966f7cb9-6rxkt" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.356393 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fdb4a8b8-df39-4411-8a72-8c721b4e5892-client-ca\") pod \"controller-manager-966f7cb9-6rxkt\" (UID: \"fdb4a8b8-df39-4411-8a72-8c721b4e5892\") " pod="openshift-controller-manager/controller-manager-966f7cb9-6rxkt" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.356480 5119 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-client-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.356492 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.356501 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8j222\" (UniqueName: \"kubernetes.io/projected/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-kube-api-access-8j222\") on node \"crc\" DevicePath \"\"" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.356510 5119 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-tmp\") on node \"crc\" DevicePath \"\"" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.356517 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.356527 5119 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.357374 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fdb4a8b8-df39-4411-8a72-8c721b4e5892-client-ca\") pod \"controller-manager-966f7cb9-6rxkt\" (UID: \"fdb4a8b8-df39-4411-8a72-8c721b4e5892\") " pod="openshift-controller-manager/controller-manager-966f7cb9-6rxkt" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.357733 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fdb4a8b8-df39-4411-8a72-8c721b4e5892-tmp\") pod \"controller-manager-966f7cb9-6rxkt\" (UID: \"fdb4a8b8-df39-4411-8a72-8c721b4e5892\") " pod="openshift-controller-manager/controller-manager-966f7cb9-6rxkt" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.358530 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fdb4a8b8-df39-4411-8a72-8c721b4e5892-proxy-ca-bundles\") pod \"controller-manager-966f7cb9-6rxkt\" (UID: \"fdb4a8b8-df39-4411-8a72-8c721b4e5892\") " pod="openshift-controller-manager/controller-manager-966f7cb9-6rxkt" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.359199 5119 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdb4a8b8-df39-4411-8a72-8c721b4e5892-config\") pod \"controller-manager-966f7cb9-6rxkt\" (UID: \"fdb4a8b8-df39-4411-8a72-8c721b4e5892\") " pod="openshift-controller-manager/controller-manager-966f7cb9-6rxkt" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.362033 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdb4a8b8-df39-4411-8a72-8c721b4e5892-serving-cert\") pod \"controller-manager-966f7cb9-6rxkt\" (UID: \"fdb4a8b8-df39-4411-8a72-8c721b4e5892\") " pod="openshift-controller-manager/controller-manager-966f7cb9-6rxkt" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.378978 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntghv\" (UniqueName: \"kubernetes.io/projected/fdb4a8b8-df39-4411-8a72-8c721b4e5892-kube-api-access-ntghv\") pod \"controller-manager-966f7cb9-6rxkt\" (UID: \"fdb4a8b8-df39-4411-8a72-8c721b4e5892\") " pod="openshift-controller-manager/controller-manager-966f7cb9-6rxkt" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.449748 5119 generic.go:358] "Generic (PLEG): container finished" podID="6999040a-bf06-4fe7-8326-ee7f5ebb6a0e" containerID="9c8ddd5748e56f2fe082e002c9c19876def9dbb526b96929aee494080663f475" exitCode=0 Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.449805 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-748bd4bc57-wswgt" event={"ID":"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e","Type":"ContainerDied","Data":"9c8ddd5748e56f2fe082e002c9c19876def9dbb526b96929aee494080663f475"} Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.449871 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-748bd4bc57-wswgt" event={"ID":"6999040a-bf06-4fe7-8326-ee7f5ebb6a0e","Type":"ContainerDied","Data":"5b20bd07d5e0908f714e6c76a02428f60b21f97ee3ee716b41376125ca91dd20"} Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.449896 5119 scope.go:117] "RemoveContainer" containerID="9c8ddd5748e56f2fe082e002c9c19876def9dbb526b96929aee494080663f475" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.449924 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-748bd4bc57-wswgt" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.469445 5119 scope.go:117] "RemoveContainer" containerID="9c8ddd5748e56f2fe082e002c9c19876def9dbb526b96929aee494080663f475" Feb 20 00:15:24 crc kubenswrapper[5119]: E0220 00:15:24.472805 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c8ddd5748e56f2fe082e002c9c19876def9dbb526b96929aee494080663f475\": container with ID starting with 9c8ddd5748e56f2fe082e002c9c19876def9dbb526b96929aee494080663f475 not found: ID does not exist" containerID="9c8ddd5748e56f2fe082e002c9c19876def9dbb526b96929aee494080663f475" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.472883 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c8ddd5748e56f2fe082e002c9c19876def9dbb526b96929aee494080663f475"} err="failed to get container status \"9c8ddd5748e56f2fe082e002c9c19876def9dbb526b96929aee494080663f475\": rpc error: code = NotFound desc = could not find container \"9c8ddd5748e56f2fe082e002c9c19876def9dbb526b96929aee494080663f475\": container with ID starting with 9c8ddd5748e56f2fe082e002c9c19876def9dbb526b96929aee494080663f475 not found: ID does not exist" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.490265 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-748bd4bc57-wswgt"] Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.496308 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-748bd4bc57-wswgt"] Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.529172 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-966f7cb9-6rxkt" Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.795655 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-966f7cb9-6rxkt"] Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.814523 5119 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 20 00:15:24 crc kubenswrapper[5119]: I0220 00:15:24.886563 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6999040a-bf06-4fe7-8326-ee7f5ebb6a0e" path="/var/lib/kubelet/pods/6999040a-bf06-4fe7-8326-ee7f5ebb6a0e/volumes" Feb 20 00:15:25 crc kubenswrapper[5119]: I0220 00:15:25.458525 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-966f7cb9-6rxkt" event={"ID":"fdb4a8b8-df39-4411-8a72-8c721b4e5892","Type":"ContainerStarted","Data":"4852f9db98e3043b3b624f96748f1eff51295b3048325b72befc85abdc9b4a5a"} Feb 20 00:15:25 crc kubenswrapper[5119]: I0220 00:15:25.459002 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-966f7cb9-6rxkt" event={"ID":"fdb4a8b8-df39-4411-8a72-8c721b4e5892","Type":"ContainerStarted","Data":"061074ed7e233df9ae8ef0c13b4dfb945385158e45b1d33b7f0637db12b9d270"} Feb 20 00:15:25 crc kubenswrapper[5119]: I0220 00:15:25.459020 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-966f7cb9-6rxkt" Feb 20 00:15:25 crc kubenswrapper[5119]: I0220 00:15:25.492663 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-966f7cb9-6rxkt" podStartSLOduration=2.492638345 podStartE2EDuration="2.492638345s" podCreationTimestamp="2026-02-20 00:15:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:15:25.48524224 +0000 UTC m=+307.464206602" watchObservedRunningTime="2026-02-20 00:15:25.492638345 +0000 UTC m=+307.471602667" Feb 20 00:15:25 crc kubenswrapper[5119]: I0220 00:15:25.795211 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-966f7cb9-6rxkt" Feb 20 00:16:26 crc kubenswrapper[5119]: I0220 00:16:26.900006 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8ll66"] Feb 20 00:16:26 crc kubenswrapper[5119]: I0220 00:16:26.902830 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8ll66" podUID="7a6456d4-f2fc-4c32-82bf-9c58cfa87699" containerName="registry-server" containerID="cri-o://07aec39cd2bacbdcca785ccd3d0a11155442fcd9db02bf16ce9723428a3a73ce" gracePeriod=30 Feb 20 00:16:26 crc kubenswrapper[5119]: I0220 00:16:26.924890 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z2qsm"] Feb 20 00:16:26 crc kubenswrapper[5119]: I0220 00:16:26.926356 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z2qsm" podUID="d1da415e-215f-4b73-b5b5-36a8c7e68fda" containerName="registry-server" containerID="cri-o://63a71f6e336d029fb4a7e22d7de5ae6130733f4b21ccee1801400214d06898af" gracePeriod=30 Feb 20 00:16:26 crc kubenswrapper[5119]: I0220 00:16:26.937814 5119 kubelet.go:2553] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-gx2g8"] Feb 20 00:16:26 crc kubenswrapper[5119]: I0220 00:16:26.938409 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-gx2g8" podUID="cd98dcc8-4803-40df-a2d1-88f48b1e14b1" containerName="marketplace-operator" containerID="cri-o://760256b4fbe96770d8cda15d1d2889d7eac198398e48564847714ae61ad9c6be" gracePeriod=30 Feb 20 00:16:26 crc kubenswrapper[5119]: I0220 00:16:26.949437 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rprkl"] Feb 20 00:16:26 crc kubenswrapper[5119]: I0220 00:16:26.949928 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rprkl" podUID="14397656-dc1d-4bf5-a1f2-e9b79fab3e53" containerName="registry-server" containerID="cri-o://99dcf975a0cdef0145a8ac6d7b3158265540b340e3fe6bd87ecd699ae42d5cd9" gracePeriod=30 Feb 20 00:16:26 crc kubenswrapper[5119]: I0220 00:16:26.961641 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gr894"] Feb 20 00:16:26 crc kubenswrapper[5119]: I0220 00:16:26.962101 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gr894" podUID="fdd71310-68b1-4580-8ea5-053669823d3c" containerName="registry-server" containerID="cri-o://20d6331e10cf6ada193a99f4fa3c3a42ba8fa5b393ba840ba3a79faba7aa12d8" gracePeriod=30 Feb 20 00:16:26 crc kubenswrapper[5119]: I0220 00:16:26.973611 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-htf8j"] Feb 20 00:16:26 crc kubenswrapper[5119]: I0220 00:16:26.984217 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-htf8j"] Feb 20 00:16:26 crc kubenswrapper[5119]: I0220 00:16:26.984425 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-htf8j" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.137381 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b64adc58-6ed6-41c9-95bd-535f34890377-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-htf8j\" (UID: \"b64adc58-6ed6-41c9-95bd-535f34890377\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-htf8j" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.137977 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b64adc58-6ed6-41c9-95bd-535f34890377-tmp\") pod \"marketplace-operator-547dbd544d-htf8j\" (UID: \"b64adc58-6ed6-41c9-95bd-535f34890377\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-htf8j" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.138080 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b64adc58-6ed6-41c9-95bd-535f34890377-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-htf8j\" (UID: \"b64adc58-6ed6-41c9-95bd-535f34890377\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-htf8j" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.138116 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzxk8\" (UniqueName: \"kubernetes.io/projected/b64adc58-6ed6-41c9-95bd-535f34890377-kube-api-access-rzxk8\") pod \"marketplace-operator-547dbd544d-htf8j\" (UID: \"b64adc58-6ed6-41c9-95bd-535f34890377\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-htf8j" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.239558 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b64adc58-6ed6-41c9-95bd-535f34890377-tmp\") pod \"marketplace-operator-547dbd544d-htf8j\" (UID: \"b64adc58-6ed6-41c9-95bd-535f34890377\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-htf8j" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.239618 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b64adc58-6ed6-41c9-95bd-535f34890377-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-htf8j\" (UID: \"b64adc58-6ed6-41c9-95bd-535f34890377\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-htf8j" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.239663 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rzxk8\" (UniqueName: \"kubernetes.io/projected/b64adc58-6ed6-41c9-95bd-535f34890377-kube-api-access-rzxk8\") pod \"marketplace-operator-547dbd544d-htf8j\" (UID: \"b64adc58-6ed6-41c9-95bd-535f34890377\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-htf8j" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.239748 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b64adc58-6ed6-41c9-95bd-535f34890377-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-htf8j\" (UID: \"b64adc58-6ed6-41c9-95bd-535f34890377\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-htf8j" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.240933 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b64adc58-6ed6-41c9-95bd-535f34890377-tmp\") pod \"marketplace-operator-547dbd544d-htf8j\" (UID: \"b64adc58-6ed6-41c9-95bd-535f34890377\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-htf8j" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.243174 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b64adc58-6ed6-41c9-95bd-535f34890377-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-htf8j\" (UID: \"b64adc58-6ed6-41c9-95bd-535f34890377\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-htf8j" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.249635 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b64adc58-6ed6-41c9-95bd-535f34890377-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-htf8j\" (UID: \"b64adc58-6ed6-41c9-95bd-535f34890377\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-htf8j" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.263029 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzxk8\" (UniqueName: \"kubernetes.io/projected/b64adc58-6ed6-41c9-95bd-535f34890377-kube-api-access-rzxk8\") pod \"marketplace-operator-547dbd544d-htf8j\" (UID: \"b64adc58-6ed6-41c9-95bd-535f34890377\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-htf8j" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.405804 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-htf8j" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.420467 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z2qsm" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.437643 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gr894" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.448070 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-gx2g8" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.474888 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8ll66" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.544700 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1da415e-215f-4b73-b5b5-36a8c7e68fda-catalog-content\") pod \"d1da415e-215f-4b73-b5b5-36a8c7e68fda\" (UID: \"d1da415e-215f-4b73-b5b5-36a8c7e68fda\") " Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.544767 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd98dcc8-4803-40df-a2d1-88f48b1e14b1-marketplace-trusted-ca\") pod \"cd98dcc8-4803-40df-a2d1-88f48b1e14b1\" (UID: \"cd98dcc8-4803-40df-a2d1-88f48b1e14b1\") " Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.544819 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2n5n\" (UniqueName: \"kubernetes.io/projected/d1da415e-215f-4b73-b5b5-36a8c7e68fda-kube-api-access-j2n5n\") pod \"d1da415e-215f-4b73-b5b5-36a8c7e68fda\" (UID: \"d1da415e-215f-4b73-b5b5-36a8c7e68fda\") " Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.544895 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1da415e-215f-4b73-b5b5-36a8c7e68fda-utilities\") pod \"d1da415e-215f-4b73-b5b5-36a8c7e68fda\" (UID: \"d1da415e-215f-4b73-b5b5-36a8c7e68fda\") " Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.544946 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd71310-68b1-4580-8ea5-053669823d3c-catalog-content\") pod \"fdd71310-68b1-4580-8ea5-053669823d3c\" (UID: \"fdd71310-68b1-4580-8ea5-053669823d3c\") " Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.545017 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42l4k\" (UniqueName: \"kubernetes.io/projected/fdd71310-68b1-4580-8ea5-053669823d3c-kube-api-access-42l4k\") pod \"fdd71310-68b1-4580-8ea5-053669823d3c\" (UID: \"fdd71310-68b1-4580-8ea5-053669823d3c\") " Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.545053 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cd98dcc8-4803-40df-a2d1-88f48b1e14b1-tmp\") pod \"cd98dcc8-4803-40df-a2d1-88f48b1e14b1\" (UID: \"cd98dcc8-4803-40df-a2d1-88f48b1e14b1\") " Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.545126 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc2ns\" (UniqueName: \"kubernetes.io/projected/cd98dcc8-4803-40df-a2d1-88f48b1e14b1-kube-api-access-rc2ns\") pod \"cd98dcc8-4803-40df-a2d1-88f48b1e14b1\" (UID: \"cd98dcc8-4803-40df-a2d1-88f48b1e14b1\") " Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.545180 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd71310-68b1-4580-8ea5-053669823d3c-utilities\") pod \"fdd71310-68b1-4580-8ea5-053669823d3c\" (UID: \"fdd71310-68b1-4580-8ea5-053669823d3c\") " Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.545212 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/cd98dcc8-4803-40df-a2d1-88f48b1e14b1-marketplace-operator-metrics\") pod \"cd98dcc8-4803-40df-a2d1-88f48b1e14b1\" (UID: \"cd98dcc8-4803-40df-a2d1-88f48b1e14b1\") " Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.545786 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd98dcc8-4803-40df-a2d1-88f48b1e14b1-tmp" (OuterVolumeSpecName: "tmp") pod "cd98dcc8-4803-40df-a2d1-88f48b1e14b1" (UID: "cd98dcc8-4803-40df-a2d1-88f48b1e14b1"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.547329 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdd71310-68b1-4580-8ea5-053669823d3c-utilities" (OuterVolumeSpecName: "utilities") pod "fdd71310-68b1-4580-8ea5-053669823d3c" (UID: "fdd71310-68b1-4580-8ea5-053669823d3c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.549145 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1da415e-215f-4b73-b5b5-36a8c7e68fda-utilities" (OuterVolumeSpecName: "utilities") pod "d1da415e-215f-4b73-b5b5-36a8c7e68fda" (UID: "d1da415e-215f-4b73-b5b5-36a8c7e68fda"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.549808 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd98dcc8-4803-40df-a2d1-88f48b1e14b1-kube-api-access-rc2ns" (OuterVolumeSpecName: "kube-api-access-rc2ns") pod "cd98dcc8-4803-40df-a2d1-88f48b1e14b1" (UID: "cd98dcc8-4803-40df-a2d1-88f48b1e14b1"). InnerVolumeSpecName "kube-api-access-rc2ns". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.549927 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1da415e-215f-4b73-b5b5-36a8c7e68fda-kube-api-access-j2n5n" (OuterVolumeSpecName: "kube-api-access-j2n5n") pod "d1da415e-215f-4b73-b5b5-36a8c7e68fda" (UID: "d1da415e-215f-4b73-b5b5-36a8c7e68fda"). InnerVolumeSpecName "kube-api-access-j2n5n". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.549723 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdd71310-68b1-4580-8ea5-053669823d3c-kube-api-access-42l4k" (OuterVolumeSpecName: "kube-api-access-42l4k") pod "fdd71310-68b1-4580-8ea5-053669823d3c" (UID: "fdd71310-68b1-4580-8ea5-053669823d3c"). InnerVolumeSpecName "kube-api-access-42l4k". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.550052 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd98dcc8-4803-40df-a2d1-88f48b1e14b1-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "cd98dcc8-4803-40df-a2d1-88f48b1e14b1" (UID: "cd98dcc8-4803-40df-a2d1-88f48b1e14b1"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.553367 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-42l4k\" (UniqueName: \"kubernetes.io/projected/fdd71310-68b1-4580-8ea5-053669823d3c-kube-api-access-42l4k\") on node \"crc\" DevicePath \"\"" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.553430 5119 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cd98dcc8-4803-40df-a2d1-88f48b1e14b1-tmp\") on node \"crc\" DevicePath \"\"" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.553447 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rc2ns\" (UniqueName: \"kubernetes.io/projected/cd98dcc8-4803-40df-a2d1-88f48b1e14b1-kube-api-access-rc2ns\") on node \"crc\" DevicePath \"\"" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.553461 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd71310-68b1-4580-8ea5-053669823d3c-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.553475 5119 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cd98dcc8-4803-40df-a2d1-88f48b1e14b1-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.553486 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j2n5n\" (UniqueName: \"kubernetes.io/projected/d1da415e-215f-4b73-b5b5-36a8c7e68fda-kube-api-access-j2n5n\") on node \"crc\" DevicePath \"\"" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.553500 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1da415e-215f-4b73-b5b5-36a8c7e68fda-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.554192 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rprkl" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.561176 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd98dcc8-4803-40df-a2d1-88f48b1e14b1-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "cd98dcc8-4803-40df-a2d1-88f48b1e14b1" (UID: "cd98dcc8-4803-40df-a2d1-88f48b1e14b1"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.655006 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14397656-dc1d-4bf5-a1f2-e9b79fab3e53-catalog-content\") pod \"14397656-dc1d-4bf5-a1f2-e9b79fab3e53\" (UID: \"14397656-dc1d-4bf5-a1f2-e9b79fab3e53\") " Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.655075 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a6456d4-f2fc-4c32-82bf-9c58cfa87699-utilities\") pod \"7a6456d4-f2fc-4c32-82bf-9c58cfa87699\" (UID: \"7a6456d4-f2fc-4c32-82bf-9c58cfa87699\") " Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.655128 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fpdk\" (UniqueName: \"kubernetes.io/projected/14397656-dc1d-4bf5-a1f2-e9b79fab3e53-kube-api-access-7fpdk\") pod \"14397656-dc1d-4bf5-a1f2-e9b79fab3e53\" (UID: \"14397656-dc1d-4bf5-a1f2-e9b79fab3e53\") " Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.655231 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a6456d4-f2fc-4c32-82bf-9c58cfa87699-catalog-content\") pod \"7a6456d4-f2fc-4c32-82bf-9c58cfa87699\" (UID: \"7a6456d4-f2fc-4c32-82bf-9c58cfa87699\") " Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.655305 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14397656-dc1d-4bf5-a1f2-e9b79fab3e53-utilities\") pod \"14397656-dc1d-4bf5-a1f2-e9b79fab3e53\" (UID: \"14397656-dc1d-4bf5-a1f2-e9b79fab3e53\") " Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.655329 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqzpb\" (UniqueName: \"kubernetes.io/projected/7a6456d4-f2fc-4c32-82bf-9c58cfa87699-kube-api-access-hqzpb\") pod \"7a6456d4-f2fc-4c32-82bf-9c58cfa87699\" (UID: \"7a6456d4-f2fc-4c32-82bf-9c58cfa87699\") " Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.655504 5119 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd98dcc8-4803-40df-a2d1-88f48b1e14b1-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.656190 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a6456d4-f2fc-4c32-82bf-9c58cfa87699-utilities" (OuterVolumeSpecName: "utilities") pod "7a6456d4-f2fc-4c32-82bf-9c58cfa87699" (UID: "7a6456d4-f2fc-4c32-82bf-9c58cfa87699"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.658824 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a6456d4-f2fc-4c32-82bf-9c58cfa87699-kube-api-access-hqzpb" (OuterVolumeSpecName: "kube-api-access-hqzpb") pod "7a6456d4-f2fc-4c32-82bf-9c58cfa87699" (UID: "7a6456d4-f2fc-4c32-82bf-9c58cfa87699"). InnerVolumeSpecName "kube-api-access-hqzpb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.673198 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1da415e-215f-4b73-b5b5-36a8c7e68fda-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1da415e-215f-4b73-b5b5-36a8c7e68fda" (UID: "d1da415e-215f-4b73-b5b5-36a8c7e68fda"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.674026 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14397656-dc1d-4bf5-a1f2-e9b79fab3e53-utilities" (OuterVolumeSpecName: "utilities") pod "14397656-dc1d-4bf5-a1f2-e9b79fab3e53" (UID: "14397656-dc1d-4bf5-a1f2-e9b79fab3e53"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.676942 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14397656-dc1d-4bf5-a1f2-e9b79fab3e53-kube-api-access-7fpdk" (OuterVolumeSpecName: "kube-api-access-7fpdk") pod "14397656-dc1d-4bf5-a1f2-e9b79fab3e53" (UID: "14397656-dc1d-4bf5-a1f2-e9b79fab3e53"). InnerVolumeSpecName "kube-api-access-7fpdk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.693196 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdd71310-68b1-4580-8ea5-053669823d3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fdd71310-68b1-4580-8ea5-053669823d3c" (UID: "fdd71310-68b1-4580-8ea5-053669823d3c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.693771 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14397656-dc1d-4bf5-a1f2-e9b79fab3e53-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14397656-dc1d-4bf5-a1f2-e9b79fab3e53" (UID: "14397656-dc1d-4bf5-a1f2-e9b79fab3e53"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.700303 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a6456d4-f2fc-4c32-82bf-9c58cfa87699-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a6456d4-f2fc-4c32-82bf-9c58cfa87699" (UID: "7a6456d4-f2fc-4c32-82bf-9c58cfa87699"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.757100 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1da415e-215f-4b73-b5b5-36a8c7e68fda-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.757146 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a6456d4-f2fc-4c32-82bf-9c58cfa87699-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.757161 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd71310-68b1-4580-8ea5-053669823d3c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.757176 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14397656-dc1d-4bf5-a1f2-e9b79fab3e53-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.757189 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hqzpb\" (UniqueName: \"kubernetes.io/projected/7a6456d4-f2fc-4c32-82bf-9c58cfa87699-kube-api-access-hqzpb\") on node \"crc\" DevicePath \"\"" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.757207 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14397656-dc1d-4bf5-a1f2-e9b79fab3e53-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.757221 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a6456d4-f2fc-4c32-82bf-9c58cfa87699-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.757235 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7fpdk\" (UniqueName: \"kubernetes.io/projected/14397656-dc1d-4bf5-a1f2-e9b79fab3e53-kube-api-access-7fpdk\") on node \"crc\" DevicePath \"\"" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.881808 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-htf8j"] Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.884193 5119 generic.go:358] "Generic (PLEG): container finished" podID="fdd71310-68b1-4580-8ea5-053669823d3c" containerID="20d6331e10cf6ada193a99f4fa3c3a42ba8fa5b393ba840ba3a79faba7aa12d8" exitCode=0 Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.884296 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gr894" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.884327 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr894" event={"ID":"fdd71310-68b1-4580-8ea5-053669823d3c","Type":"ContainerDied","Data":"20d6331e10cf6ada193a99f4fa3c3a42ba8fa5b393ba840ba3a79faba7aa12d8"} Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.884360 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr894" event={"ID":"fdd71310-68b1-4580-8ea5-053669823d3c","Type":"ContainerDied","Data":"610488e946b5f11ce0c5cff2a8079b2c9392893691db51ed0d92fbf416b29265"} Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.884381 5119 scope.go:117] "RemoveContainer" containerID="20d6331e10cf6ada193a99f4fa3c3a42ba8fa5b393ba840ba3a79faba7aa12d8" Feb 20 00:16:27 crc kubenswrapper[5119]: W0220 00:16:27.890644 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb64adc58_6ed6_41c9_95bd_535f34890377.slice/crio-24a6389ce657b07f025c3a94ac767297e570588bad2c34056869b3a276ea5f3b WatchSource:0}: Error finding container 24a6389ce657b07f025c3a94ac767297e570588bad2c34056869b3a276ea5f3b: Status 404 returned error can't find the container with id 24a6389ce657b07f025c3a94ac767297e570588bad2c34056869b3a276ea5f3b Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.892367 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-gx2g8" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.892373 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-gx2g8" event={"ID":"cd98dcc8-4803-40df-a2d1-88f48b1e14b1","Type":"ContainerDied","Data":"760256b4fbe96770d8cda15d1d2889d7eac198398e48564847714ae61ad9c6be"} Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.892356 5119 generic.go:358] "Generic (PLEG): container finished" podID="cd98dcc8-4803-40df-a2d1-88f48b1e14b1" containerID="760256b4fbe96770d8cda15d1d2889d7eac198398e48564847714ae61ad9c6be" exitCode=0 Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.892421 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-gx2g8" event={"ID":"cd98dcc8-4803-40df-a2d1-88f48b1e14b1","Type":"ContainerDied","Data":"dea992bc73d22cb1a34194ab70354b0b51c051cfd1a5d5cf7340cce16f30af3c"} Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.899241 5119 generic.go:358] "Generic (PLEG): container finished" podID="d1da415e-215f-4b73-b5b5-36a8c7e68fda" containerID="63a71f6e336d029fb4a7e22d7de5ae6130733f4b21ccee1801400214d06898af" exitCode=0 Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.899452 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2qsm" event={"ID":"d1da415e-215f-4b73-b5b5-36a8c7e68fda","Type":"ContainerDied","Data":"63a71f6e336d029fb4a7e22d7de5ae6130733f4b21ccee1801400214d06898af"} Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.899491 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2qsm" event={"ID":"d1da415e-215f-4b73-b5b5-36a8c7e68fda","Type":"ContainerDied","Data":"4b61b0c49f0ac43b6616d7360813d0df846204c4a37baaf3d3a5f936fe37e389"} Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.899499 5119 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z2qsm" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.902587 5119 generic.go:358] "Generic (PLEG): container finished" podID="7a6456d4-f2fc-4c32-82bf-9c58cfa87699" containerID="07aec39cd2bacbdcca785ccd3d0a11155442fcd9db02bf16ce9723428a3a73ce" exitCode=0 Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.903175 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ll66" event={"ID":"7a6456d4-f2fc-4c32-82bf-9c58cfa87699","Type":"ContainerDied","Data":"07aec39cd2bacbdcca785ccd3d0a11155442fcd9db02bf16ce9723428a3a73ce"} Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.903209 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ll66" event={"ID":"7a6456d4-f2fc-4c32-82bf-9c58cfa87699","Type":"ContainerDied","Data":"15871269f18ead6ce2f1f717e072e931d402e18a06ae158f7d4ff4cc13281cbe"} Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.903328 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8ll66" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.908910 5119 generic.go:358] "Generic (PLEG): container finished" podID="14397656-dc1d-4bf5-a1f2-e9b79fab3e53" containerID="99dcf975a0cdef0145a8ac6d7b3158265540b340e3fe6bd87ecd699ae42d5cd9" exitCode=0 Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.908979 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rprkl" event={"ID":"14397656-dc1d-4bf5-a1f2-e9b79fab3e53","Type":"ContainerDied","Data":"99dcf975a0cdef0145a8ac6d7b3158265540b340e3fe6bd87ecd699ae42d5cd9"} Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.908994 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rprkl" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.909024 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rprkl" event={"ID":"14397656-dc1d-4bf5-a1f2-e9b79fab3e53","Type":"ContainerDied","Data":"620d10ec41ff9ae066b4faea8b611af500a3e0a517d8e38b5f25c86ef780e4a5"} Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.918500 5119 scope.go:117] "RemoveContainer" containerID="4f4608e6294a1fb93ff27942c3d23da7aac5fd9a4c2286759f53825aa1fde169" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.933727 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gr894"] Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.939473 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gr894"] Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.951933 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rprkl"] Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.969334 5119 scope.go:117] "RemoveContainer" containerID="dd9f7793677949b5a329e7142c48208817d1d4fd303eb55b3384eb40d8bc328b" Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.974380 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rprkl"] Feb 20 00:16:27 crc kubenswrapper[5119]: I0220 00:16:27.985358 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z2qsm"] Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.002470 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z2qsm"] Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.005888 5119 scope.go:117] "RemoveContainer" containerID="20d6331e10cf6ada193a99f4fa3c3a42ba8fa5b393ba840ba3a79faba7aa12d8" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.006100 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8ll66"] Feb 20 00:16:28 crc kubenswrapper[5119]: E0220 00:16:28.006650 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20d6331e10cf6ada193a99f4fa3c3a42ba8fa5b393ba840ba3a79faba7aa12d8\": container with ID starting with 20d6331e10cf6ada193a99f4fa3c3a42ba8fa5b393ba840ba3a79faba7aa12d8 not found: ID does not exist" containerID="20d6331e10cf6ada193a99f4fa3c3a42ba8fa5b393ba840ba3a79faba7aa12d8" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.006698 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20d6331e10cf6ada193a99f4fa3c3a42ba8fa5b393ba840ba3a79faba7aa12d8"} err="failed to get container status \"20d6331e10cf6ada193a99f4fa3c3a42ba8fa5b393ba840ba3a79faba7aa12d8\": rpc error: code = NotFound desc = could not find container \"20d6331e10cf6ada193a99f4fa3c3a42ba8fa5b393ba840ba3a79faba7aa12d8\": container with ID starting with 20d6331e10cf6ada193a99f4fa3c3a42ba8fa5b393ba840ba3a79faba7aa12d8 not found: ID does not exist" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.006735 5119 scope.go:117] "RemoveContainer" containerID="4f4608e6294a1fb93ff27942c3d23da7aac5fd9a4c2286759f53825aa1fde169" Feb 20 00:16:28 crc kubenswrapper[5119]: E0220 00:16:28.007186 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"4f4608e6294a1fb93ff27942c3d23da7aac5fd9a4c2286759f53825aa1fde169\": container with ID starting with 4f4608e6294a1fb93ff27942c3d23da7aac5fd9a4c2286759f53825aa1fde169 not found: ID does not exist" containerID="4f4608e6294a1fb93ff27942c3d23da7aac5fd9a4c2286759f53825aa1fde169" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.007235 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f4608e6294a1fb93ff27942c3d23da7aac5fd9a4c2286759f53825aa1fde169"} err="failed to get container status \"4f4608e6294a1fb93ff27942c3d23da7aac5fd9a4c2286759f53825aa1fde169\": rpc error: code = NotFound desc = could not find container \"4f4608e6294a1fb93ff27942c3d23da7aac5fd9a4c2286759f53825aa1fde169\": container with ID starting with 4f4608e6294a1fb93ff27942c3d23da7aac5fd9a4c2286759f53825aa1fde169 not found: ID does not exist" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.007264 5119 scope.go:117] "RemoveContainer" containerID="dd9f7793677949b5a329e7142c48208817d1d4fd303eb55b3384eb40d8bc328b" Feb 20 00:16:28 crc kubenswrapper[5119]: E0220 00:16:28.007668 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd9f7793677949b5a329e7142c48208817d1d4fd303eb55b3384eb40d8bc328b\": container with ID starting with dd9f7793677949b5a329e7142c48208817d1d4fd303eb55b3384eb40d8bc328b not found: ID does not exist" containerID="dd9f7793677949b5a329e7142c48208817d1d4fd303eb55b3384eb40d8bc328b" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.007703 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd9f7793677949b5a329e7142c48208817d1d4fd303eb55b3384eb40d8bc328b"} err="failed to get container status \"dd9f7793677949b5a329e7142c48208817d1d4fd303eb55b3384eb40d8bc328b\": rpc error: code = NotFound desc = could not find container \"dd9f7793677949b5a329e7142c48208817d1d4fd303eb55b3384eb40d8bc328b\": container with ID starting with dd9f7793677949b5a329e7142c48208817d1d4fd303eb55b3384eb40d8bc328b not found: ID does not exist" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.007724 5119 scope.go:117] "RemoveContainer" containerID="760256b4fbe96770d8cda15d1d2889d7eac198398e48564847714ae61ad9c6be" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.010991 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8ll66"] Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.018615 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-gx2g8"] Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.022398 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-gx2g8"] Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.025858 5119 scope.go:117] "RemoveContainer" containerID="760256b4fbe96770d8cda15d1d2889d7eac198398e48564847714ae61ad9c6be" Feb 20 00:16:28 crc kubenswrapper[5119]: E0220 00:16:28.026330 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"760256b4fbe96770d8cda15d1d2889d7eac198398e48564847714ae61ad9c6be\": container with ID starting with 760256b4fbe96770d8cda15d1d2889d7eac198398e48564847714ae61ad9c6be not found: ID does not exist" containerID="760256b4fbe96770d8cda15d1d2889d7eac198398e48564847714ae61ad9c6be" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.026380 5119 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"760256b4fbe96770d8cda15d1d2889d7eac198398e48564847714ae61ad9c6be"} err="failed to get container status \"760256b4fbe96770d8cda15d1d2889d7eac198398e48564847714ae61ad9c6be\": rpc error: code = NotFound desc = could not find container \"760256b4fbe96770d8cda15d1d2889d7eac198398e48564847714ae61ad9c6be\": container with ID starting with 760256b4fbe96770d8cda15d1d2889d7eac198398e48564847714ae61ad9c6be not found: ID does not exist" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.026412 5119 scope.go:117] "RemoveContainer" containerID="63a71f6e336d029fb4a7e22d7de5ae6130733f4b21ccee1801400214d06898af" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.051192 5119 scope.go:117] "RemoveContainer" containerID="ec8d0370e606af964039c90c9a4cb0455a10ec9f26109c8f9839d196b2da911f" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.083558 5119 scope.go:117] "RemoveContainer" containerID="0001f7f748f11423cee3d39b4042d8523499cd575e40befff668db74f1fc5630" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.140721 5119 scope.go:117] "RemoveContainer" containerID="63a71f6e336d029fb4a7e22d7de5ae6130733f4b21ccee1801400214d06898af" Feb 20 00:16:28 crc kubenswrapper[5119]: E0220 00:16:28.141391 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63a71f6e336d029fb4a7e22d7de5ae6130733f4b21ccee1801400214d06898af\": container with ID starting with 63a71f6e336d029fb4a7e22d7de5ae6130733f4b21ccee1801400214d06898af not found: ID does not exist" containerID="63a71f6e336d029fb4a7e22d7de5ae6130733f4b21ccee1801400214d06898af" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.141438 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63a71f6e336d029fb4a7e22d7de5ae6130733f4b21ccee1801400214d06898af"} err="failed to get container status \"63a71f6e336d029fb4a7e22d7de5ae6130733f4b21ccee1801400214d06898af\": rpc error: code = NotFound desc = could not find container \"63a71f6e336d029fb4a7e22d7de5ae6130733f4b21ccee1801400214d06898af\": container with ID starting with 63a71f6e336d029fb4a7e22d7de5ae6130733f4b21ccee1801400214d06898af not found: ID does not exist" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.141468 5119 scope.go:117] "RemoveContainer" containerID="ec8d0370e606af964039c90c9a4cb0455a10ec9f26109c8f9839d196b2da911f" Feb 20 00:16:28 crc kubenswrapper[5119]: E0220 00:16:28.141928 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec8d0370e606af964039c90c9a4cb0455a10ec9f26109c8f9839d196b2da911f\": container with ID starting with ec8d0370e606af964039c90c9a4cb0455a10ec9f26109c8f9839d196b2da911f not found: ID does not exist" containerID="ec8d0370e606af964039c90c9a4cb0455a10ec9f26109c8f9839d196b2da911f" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.141986 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec8d0370e606af964039c90c9a4cb0455a10ec9f26109c8f9839d196b2da911f"} err="failed to get container status \"ec8d0370e606af964039c90c9a4cb0455a10ec9f26109c8f9839d196b2da911f\": rpc error: code = NotFound desc = could not find container \"ec8d0370e606af964039c90c9a4cb0455a10ec9f26109c8f9839d196b2da911f\": container with ID starting with ec8d0370e606af964039c90c9a4cb0455a10ec9f26109c8f9839d196b2da911f not found: ID does not exist" Feb 20 00:16:28 
crc kubenswrapper[5119]: I0220 00:16:28.142012 5119 scope.go:117] "RemoveContainer" containerID="0001f7f748f11423cee3d39b4042d8523499cd575e40befff668db74f1fc5630" Feb 20 00:16:28 crc kubenswrapper[5119]: E0220 00:16:28.142262 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0001f7f748f11423cee3d39b4042d8523499cd575e40befff668db74f1fc5630\": container with ID starting with 0001f7f748f11423cee3d39b4042d8523499cd575e40befff668db74f1fc5630 not found: ID does not exist" containerID="0001f7f748f11423cee3d39b4042d8523499cd575e40befff668db74f1fc5630" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.142284 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0001f7f748f11423cee3d39b4042d8523499cd575e40befff668db74f1fc5630"} err="failed to get container status \"0001f7f748f11423cee3d39b4042d8523499cd575e40befff668db74f1fc5630\": rpc error: code = NotFound desc = could not find container \"0001f7f748f11423cee3d39b4042d8523499cd575e40befff668db74f1fc5630\": container with ID starting with 0001f7f748f11423cee3d39b4042d8523499cd575e40befff668db74f1fc5630 not found: ID does not exist" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.142297 5119 scope.go:117] "RemoveContainer" containerID="07aec39cd2bacbdcca785ccd3d0a11155442fcd9db02bf16ce9723428a3a73ce" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.168396 5119 scope.go:117] "RemoveContainer" containerID="3a046e23e58d6d71bb493c445b2055b9dfa569bc186cfa52db7466e528bb705b" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.188008 5119 scope.go:117] "RemoveContainer" containerID="ff96dbb5c23f564c7100fb44d6ea14d6bf6be5b9d43b0f29dbea49efb03a6273" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.205526 5119 scope.go:117] "RemoveContainer" containerID="07aec39cd2bacbdcca785ccd3d0a11155442fcd9db02bf16ce9723428a3a73ce" Feb 20 00:16:28 crc kubenswrapper[5119]: E0220 00:16:28.206016 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07aec39cd2bacbdcca785ccd3d0a11155442fcd9db02bf16ce9723428a3a73ce\": container with ID starting with 07aec39cd2bacbdcca785ccd3d0a11155442fcd9db02bf16ce9723428a3a73ce not found: ID does not exist" containerID="07aec39cd2bacbdcca785ccd3d0a11155442fcd9db02bf16ce9723428a3a73ce" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.206050 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07aec39cd2bacbdcca785ccd3d0a11155442fcd9db02bf16ce9723428a3a73ce"} err="failed to get container status \"07aec39cd2bacbdcca785ccd3d0a11155442fcd9db02bf16ce9723428a3a73ce\": rpc error: code = NotFound desc = could not find container \"07aec39cd2bacbdcca785ccd3d0a11155442fcd9db02bf16ce9723428a3a73ce\": container with ID starting with 07aec39cd2bacbdcca785ccd3d0a11155442fcd9db02bf16ce9723428a3a73ce not found: ID does not exist" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.206073 5119 scope.go:117] "RemoveContainer" containerID="3a046e23e58d6d71bb493c445b2055b9dfa569bc186cfa52db7466e528bb705b" Feb 20 00:16:28 crc kubenswrapper[5119]: E0220 00:16:28.206294 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a046e23e58d6d71bb493c445b2055b9dfa569bc186cfa52db7466e528bb705b\": container with ID starting with 3a046e23e58d6d71bb493c445b2055b9dfa569bc186cfa52db7466e528bb705b not found: ID does not 
exist" containerID="3a046e23e58d6d71bb493c445b2055b9dfa569bc186cfa52db7466e528bb705b" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.206315 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a046e23e58d6d71bb493c445b2055b9dfa569bc186cfa52db7466e528bb705b"} err="failed to get container status \"3a046e23e58d6d71bb493c445b2055b9dfa569bc186cfa52db7466e528bb705b\": rpc error: code = NotFound desc = could not find container \"3a046e23e58d6d71bb493c445b2055b9dfa569bc186cfa52db7466e528bb705b\": container with ID starting with 3a046e23e58d6d71bb493c445b2055b9dfa569bc186cfa52db7466e528bb705b not found: ID does not exist" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.206327 5119 scope.go:117] "RemoveContainer" containerID="ff96dbb5c23f564c7100fb44d6ea14d6bf6be5b9d43b0f29dbea49efb03a6273" Feb 20 00:16:28 crc kubenswrapper[5119]: E0220 00:16:28.206518 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff96dbb5c23f564c7100fb44d6ea14d6bf6be5b9d43b0f29dbea49efb03a6273\": container with ID starting with ff96dbb5c23f564c7100fb44d6ea14d6bf6be5b9d43b0f29dbea49efb03a6273 not found: ID does not exist" containerID="ff96dbb5c23f564c7100fb44d6ea14d6bf6be5b9d43b0f29dbea49efb03a6273" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.206587 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff96dbb5c23f564c7100fb44d6ea14d6bf6be5b9d43b0f29dbea49efb03a6273"} err="failed to get container status \"ff96dbb5c23f564c7100fb44d6ea14d6bf6be5b9d43b0f29dbea49efb03a6273\": rpc error: code = NotFound desc = could not find container \"ff96dbb5c23f564c7100fb44d6ea14d6bf6be5b9d43b0f29dbea49efb03a6273\": container with ID starting with ff96dbb5c23f564c7100fb44d6ea14d6bf6be5b9d43b0f29dbea49efb03a6273 not found: ID does not exist" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.206600 5119 scope.go:117] "RemoveContainer" containerID="99dcf975a0cdef0145a8ac6d7b3158265540b340e3fe6bd87ecd699ae42d5cd9" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.229218 5119 scope.go:117] "RemoveContainer" containerID="02aed272c44710ef57f5af5081fc3a62692b94c5da44d1fa21503959ce5be9e2" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.242576 5119 scope.go:117] "RemoveContainer" containerID="503d8075599f7637f248bdf3649f4cc9be53dd854aae0d9ecb47299b69c3d980" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.267572 5119 scope.go:117] "RemoveContainer" containerID="99dcf975a0cdef0145a8ac6d7b3158265540b340e3fe6bd87ecd699ae42d5cd9" Feb 20 00:16:28 crc kubenswrapper[5119]: E0220 00:16:28.268987 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99dcf975a0cdef0145a8ac6d7b3158265540b340e3fe6bd87ecd699ae42d5cd9\": container with ID starting with 99dcf975a0cdef0145a8ac6d7b3158265540b340e3fe6bd87ecd699ae42d5cd9 not found: ID does not exist" containerID="99dcf975a0cdef0145a8ac6d7b3158265540b340e3fe6bd87ecd699ae42d5cd9" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.269032 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99dcf975a0cdef0145a8ac6d7b3158265540b340e3fe6bd87ecd699ae42d5cd9"} err="failed to get container status \"99dcf975a0cdef0145a8ac6d7b3158265540b340e3fe6bd87ecd699ae42d5cd9\": rpc error: code = NotFound desc = could not find container 
\"99dcf975a0cdef0145a8ac6d7b3158265540b340e3fe6bd87ecd699ae42d5cd9\": container with ID starting with 99dcf975a0cdef0145a8ac6d7b3158265540b340e3fe6bd87ecd699ae42d5cd9 not found: ID does not exist" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.269064 5119 scope.go:117] "RemoveContainer" containerID="02aed272c44710ef57f5af5081fc3a62692b94c5da44d1fa21503959ce5be9e2" Feb 20 00:16:28 crc kubenswrapper[5119]: E0220 00:16:28.269585 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02aed272c44710ef57f5af5081fc3a62692b94c5da44d1fa21503959ce5be9e2\": container with ID starting with 02aed272c44710ef57f5af5081fc3a62692b94c5da44d1fa21503959ce5be9e2 not found: ID does not exist" containerID="02aed272c44710ef57f5af5081fc3a62692b94c5da44d1fa21503959ce5be9e2" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.269607 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02aed272c44710ef57f5af5081fc3a62692b94c5da44d1fa21503959ce5be9e2"} err="failed to get container status \"02aed272c44710ef57f5af5081fc3a62692b94c5da44d1fa21503959ce5be9e2\": rpc error: code = NotFound desc = could not find container \"02aed272c44710ef57f5af5081fc3a62692b94c5da44d1fa21503959ce5be9e2\": container with ID starting with 02aed272c44710ef57f5af5081fc3a62692b94c5da44d1fa21503959ce5be9e2 not found: ID does not exist" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.269619 5119 scope.go:117] "RemoveContainer" containerID="503d8075599f7637f248bdf3649f4cc9be53dd854aae0d9ecb47299b69c3d980" Feb 20 00:16:28 crc kubenswrapper[5119]: E0220 00:16:28.269892 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"503d8075599f7637f248bdf3649f4cc9be53dd854aae0d9ecb47299b69c3d980\": container with ID starting with 503d8075599f7637f248bdf3649f4cc9be53dd854aae0d9ecb47299b69c3d980 not found: ID does not exist" containerID="503d8075599f7637f248bdf3649f4cc9be53dd854aae0d9ecb47299b69c3d980" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.269928 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"503d8075599f7637f248bdf3649f4cc9be53dd854aae0d9ecb47299b69c3d980"} err="failed to get container status \"503d8075599f7637f248bdf3649f4cc9be53dd854aae0d9ecb47299b69c3d980\": rpc error: code = NotFound desc = could not find container \"503d8075599f7637f248bdf3649f4cc9be53dd854aae0d9ecb47299b69c3d980\": container with ID starting with 503d8075599f7637f248bdf3649f4cc9be53dd854aae0d9ecb47299b69c3d980 not found: ID does not exist" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.865263 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14397656-dc1d-4bf5-a1f2-e9b79fab3e53" path="/var/lib/kubelet/pods/14397656-dc1d-4bf5-a1f2-e9b79fab3e53/volumes" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.865909 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a6456d4-f2fc-4c32-82bf-9c58cfa87699" path="/var/lib/kubelet/pods/7a6456d4-f2fc-4c32-82bf-9c58cfa87699/volumes" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.866701 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd98dcc8-4803-40df-a2d1-88f48b1e14b1" path="/var/lib/kubelet/pods/cd98dcc8-4803-40df-a2d1-88f48b1e14b1/volumes" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.867657 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="d1da415e-215f-4b73-b5b5-36a8c7e68fda" path="/var/lib/kubelet/pods/d1da415e-215f-4b73-b5b5-36a8c7e68fda/volumes" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.868200 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdd71310-68b1-4580-8ea5-053669823d3c" path="/var/lib/kubelet/pods/fdd71310-68b1-4580-8ea5-053669823d3c/volumes" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.919821 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-htf8j" event={"ID":"b64adc58-6ed6-41c9-95bd-535f34890377","Type":"ContainerStarted","Data":"1b61e095f842648eaf3bbee3c971ef01d45b77fb365d1c975a4e13a78d3fc443"} Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.919878 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-htf8j" event={"ID":"b64adc58-6ed6-41c9-95bd-535f34890377","Type":"ContainerStarted","Data":"24a6389ce657b07f025c3a94ac767297e570588bad2c34056869b3a276ea5f3b"} Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.920224 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-htf8j" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.923473 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-htf8j" Feb 20 00:16:28 crc kubenswrapper[5119]: I0220 00:16:28.944576 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-htf8j" podStartSLOduration=2.944560993 podStartE2EDuration="2.944560993s" podCreationTimestamp="2026-02-20 00:16:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:16:28.941939849 +0000 UTC m=+370.920904171" watchObservedRunningTime="2026-02-20 00:16:28.944560993 +0000 UTC m=+370.923525295" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.120111 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8cz9b"] Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.121693 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d1da415e-215f-4b73-b5b5-36a8c7e68fda" containerName="registry-server" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.121713 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1da415e-215f-4b73-b5b5-36a8c7e68fda" containerName="registry-server" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.121728 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7a6456d4-f2fc-4c32-82bf-9c58cfa87699" containerName="registry-server" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.121736 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a6456d4-f2fc-4c32-82bf-9c58cfa87699" containerName="registry-server" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.121748 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fdd71310-68b1-4580-8ea5-053669823d3c" containerName="registry-server" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.121755 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd71310-68b1-4580-8ea5-053669823d3c" containerName="registry-server" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.121770 5119 cpu_manager.go:401] "RemoveStaleState: 
containerMap: removing container" podUID="cd98dcc8-4803-40df-a2d1-88f48b1e14b1" containerName="marketplace-operator" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.121777 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd98dcc8-4803-40df-a2d1-88f48b1e14b1" containerName="marketplace-operator" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.121787 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fdd71310-68b1-4580-8ea5-053669823d3c" containerName="extract-content" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.121793 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd71310-68b1-4580-8ea5-053669823d3c" containerName="extract-content" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.121804 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="14397656-dc1d-4bf5-a1f2-e9b79fab3e53" containerName="extract-utilities" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.121810 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="14397656-dc1d-4bf5-a1f2-e9b79fab3e53" containerName="extract-utilities" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.121819 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7a6456d4-f2fc-4c32-82bf-9c58cfa87699" containerName="extract-content" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.121852 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a6456d4-f2fc-4c32-82bf-9c58cfa87699" containerName="extract-content" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.121862 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7a6456d4-f2fc-4c32-82bf-9c58cfa87699" containerName="extract-utilities" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.121870 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a6456d4-f2fc-4c32-82bf-9c58cfa87699" containerName="extract-utilities" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.121883 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d1da415e-215f-4b73-b5b5-36a8c7e68fda" containerName="extract-utilities" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.121890 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1da415e-215f-4b73-b5b5-36a8c7e68fda" containerName="extract-utilities" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.121901 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d1da415e-215f-4b73-b5b5-36a8c7e68fda" containerName="extract-content" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.121909 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1da415e-215f-4b73-b5b5-36a8c7e68fda" containerName="extract-content" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.121918 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fdd71310-68b1-4580-8ea5-053669823d3c" containerName="extract-utilities" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.121924 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd71310-68b1-4580-8ea5-053669823d3c" containerName="extract-utilities" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.121939 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="14397656-dc1d-4bf5-a1f2-e9b79fab3e53" containerName="extract-content" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.121946 5119 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="14397656-dc1d-4bf5-a1f2-e9b79fab3e53" containerName="extract-content" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.121955 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="14397656-dc1d-4bf5-a1f2-e9b79fab3e53" containerName="registry-server" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.121961 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="14397656-dc1d-4bf5-a1f2-e9b79fab3e53" containerName="registry-server" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.122055 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="fdd71310-68b1-4580-8ea5-053669823d3c" containerName="registry-server" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.122069 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="d1da415e-215f-4b73-b5b5-36a8c7e68fda" containerName="registry-server" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.122079 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="7a6456d4-f2fc-4c32-82bf-9c58cfa87699" containerName="registry-server" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.122087 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="14397656-dc1d-4bf5-a1f2-e9b79fab3e53" containerName="registry-server" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.122096 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="cd98dcc8-4803-40df-a2d1-88f48b1e14b1" containerName="marketplace-operator" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.126956 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8cz9b" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.129712 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.131565 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8cz9b"] Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.200909 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2314a6d-2b65-453e-8296-608f8e488ff4-utilities\") pod \"redhat-marketplace-8cz9b\" (UID: \"b2314a6d-2b65-453e-8296-608f8e488ff4\") " pod="openshift-marketplace/redhat-marketplace-8cz9b" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.200993 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdlkb\" (UniqueName: \"kubernetes.io/projected/b2314a6d-2b65-453e-8296-608f8e488ff4-kube-api-access-rdlkb\") pod \"redhat-marketplace-8cz9b\" (UID: \"b2314a6d-2b65-453e-8296-608f8e488ff4\") " pod="openshift-marketplace/redhat-marketplace-8cz9b" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.201206 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2314a6d-2b65-453e-8296-608f8e488ff4-catalog-content\") pod \"redhat-marketplace-8cz9b\" (UID: \"b2314a6d-2b65-453e-8296-608f8e488ff4\") " pod="openshift-marketplace/redhat-marketplace-8cz9b" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.302972 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rdlkb\" (UniqueName: 
\"kubernetes.io/projected/b2314a6d-2b65-453e-8296-608f8e488ff4-kube-api-access-rdlkb\") pod \"redhat-marketplace-8cz9b\" (UID: \"b2314a6d-2b65-453e-8296-608f8e488ff4\") " pod="openshift-marketplace/redhat-marketplace-8cz9b" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.303155 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2314a6d-2b65-453e-8296-608f8e488ff4-catalog-content\") pod \"redhat-marketplace-8cz9b\" (UID: \"b2314a6d-2b65-453e-8296-608f8e488ff4\") " pod="openshift-marketplace/redhat-marketplace-8cz9b" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.303295 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2314a6d-2b65-453e-8296-608f8e488ff4-utilities\") pod \"redhat-marketplace-8cz9b\" (UID: \"b2314a6d-2b65-453e-8296-608f8e488ff4\") " pod="openshift-marketplace/redhat-marketplace-8cz9b" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.304321 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2314a6d-2b65-453e-8296-608f8e488ff4-catalog-content\") pod \"redhat-marketplace-8cz9b\" (UID: \"b2314a6d-2b65-453e-8296-608f8e488ff4\") " pod="openshift-marketplace/redhat-marketplace-8cz9b" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.304931 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2314a6d-2b65-453e-8296-608f8e488ff4-utilities\") pod \"redhat-marketplace-8cz9b\" (UID: \"b2314a6d-2b65-453e-8296-608f8e488ff4\") " pod="openshift-marketplace/redhat-marketplace-8cz9b" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.314608 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5nb2n"] Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.326126 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5nb2n" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.331831 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5nb2n"] Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.331909 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.339914 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdlkb\" (UniqueName: \"kubernetes.io/projected/b2314a6d-2b65-453e-8296-608f8e488ff4-kube-api-access-rdlkb\") pod \"redhat-marketplace-8cz9b\" (UID: \"b2314a6d-2b65-453e-8296-608f8e488ff4\") " pod="openshift-marketplace/redhat-marketplace-8cz9b" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.404400 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqf84\" (UniqueName: \"kubernetes.io/projected/11050770-13c5-418e-b5fc-cc1bec3dc51e-kube-api-access-kqf84\") pod \"redhat-operators-5nb2n\" (UID: \"11050770-13c5-418e-b5fc-cc1bec3dc51e\") " pod="openshift-marketplace/redhat-operators-5nb2n" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.404463 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11050770-13c5-418e-b5fc-cc1bec3dc51e-utilities\") pod \"redhat-operators-5nb2n\" (UID: \"11050770-13c5-418e-b5fc-cc1bec3dc51e\") " pod="openshift-marketplace/redhat-operators-5nb2n" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.404719 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11050770-13c5-418e-b5fc-cc1bec3dc51e-catalog-content\") pod \"redhat-operators-5nb2n\" (UID: \"11050770-13c5-418e-b5fc-cc1bec3dc51e\") " pod="openshift-marketplace/redhat-operators-5nb2n" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.454440 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8cz9b" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.506862 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11050770-13c5-418e-b5fc-cc1bec3dc51e-catalog-content\") pod \"redhat-operators-5nb2n\" (UID: \"11050770-13c5-418e-b5fc-cc1bec3dc51e\") " pod="openshift-marketplace/redhat-operators-5nb2n" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.507148 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kqf84\" (UniqueName: \"kubernetes.io/projected/11050770-13c5-418e-b5fc-cc1bec3dc51e-kube-api-access-kqf84\") pod \"redhat-operators-5nb2n\" (UID: \"11050770-13c5-418e-b5fc-cc1bec3dc51e\") " pod="openshift-marketplace/redhat-operators-5nb2n" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.507219 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11050770-13c5-418e-b5fc-cc1bec3dc51e-utilities\") pod \"redhat-operators-5nb2n\" (UID: \"11050770-13c5-418e-b5fc-cc1bec3dc51e\") " pod="openshift-marketplace/redhat-operators-5nb2n" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.508894 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11050770-13c5-418e-b5fc-cc1bec3dc51e-catalog-content\") pod \"redhat-operators-5nb2n\" (UID: \"11050770-13c5-418e-b5fc-cc1bec3dc51e\") " pod="openshift-marketplace/redhat-operators-5nb2n" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.509097 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11050770-13c5-418e-b5fc-cc1bec3dc51e-utilities\") pod \"redhat-operators-5nb2n\" (UID: \"11050770-13c5-418e-b5fc-cc1bec3dc51e\") " pod="openshift-marketplace/redhat-operators-5nb2n" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.545738 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqf84\" (UniqueName: \"kubernetes.io/projected/11050770-13c5-418e-b5fc-cc1bec3dc51e-kube-api-access-kqf84\") pod \"redhat-operators-5nb2n\" (UID: \"11050770-13c5-418e-b5fc-cc1bec3dc51e\") " pod="openshift-marketplace/redhat-operators-5nb2n" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.675361 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5nb2n" Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.872687 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8cz9b"] Feb 20 00:16:30 crc kubenswrapper[5119]: W0220 00:16:30.876451 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2314a6d_2b65_453e_8296_608f8e488ff4.slice/crio-b2c8e43435da37375915d2bd96e4cc56e2fde695ab1e83c91165076e87a3911b WatchSource:0}: Error finding container b2c8e43435da37375915d2bd96e4cc56e2fde695ab1e83c91165076e87a3911b: Status 404 returned error can't find the container with id b2c8e43435da37375915d2bd96e4cc56e2fde695ab1e83c91165076e87a3911b Feb 20 00:16:30 crc kubenswrapper[5119]: I0220 00:16:30.935714 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8cz9b" event={"ID":"b2314a6d-2b65-453e-8296-608f8e488ff4","Type":"ContainerStarted","Data":"b2c8e43435da37375915d2bd96e4cc56e2fde695ab1e83c91165076e87a3911b"} Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.087331 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5nb2n"] Feb 20 00:16:31 crc kubenswrapper[5119]: W0220 00:16:31.146311 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11050770_13c5_418e_b5fc_cc1bec3dc51e.slice/crio-904a632fbd173b4be3e388a231b051b21eea2420fcb7c82d4581900f5bdb3ee3 WatchSource:0}: Error finding container 904a632fbd173b4be3e388a231b051b21eea2420fcb7c82d4581900f5bdb3ee3: Status 404 returned error can't find the container with id 904a632fbd173b4be3e388a231b051b21eea2420fcb7c82d4581900f5bdb3ee3 Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.371643 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-rdmpz"] Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.376289 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-rdmpz" Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.384948 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-rdmpz"] Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.528630 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8dcddb1b-6dac-420a-9478-7dd29fd6177d-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-rdmpz\" (UID: \"8dcddb1b-6dac-420a-9478-7dd29fd6177d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rdmpz" Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.528680 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8dcddb1b-6dac-420a-9478-7dd29fd6177d-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-rdmpz\" (UID: \"8dcddb1b-6dac-420a-9478-7dd29fd6177d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rdmpz" Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.528770 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8dcddb1b-6dac-420a-9478-7dd29fd6177d-trusted-ca\") pod \"image-registry-5d9d95bf5b-rdmpz\" (UID: \"8dcddb1b-6dac-420a-9478-7dd29fd6177d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rdmpz" Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.528796 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8dcddb1b-6dac-420a-9478-7dd29fd6177d-registry-certificates\") pod \"image-registry-5d9d95bf5b-rdmpz\" (UID: \"8dcddb1b-6dac-420a-9478-7dd29fd6177d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rdmpz" Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.528835 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8dcddb1b-6dac-420a-9478-7dd29fd6177d-registry-tls\") pod \"image-registry-5d9d95bf5b-rdmpz\" (UID: \"8dcddb1b-6dac-420a-9478-7dd29fd6177d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rdmpz" Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.528851 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8dcddb1b-6dac-420a-9478-7dd29fd6177d-bound-sa-token\") pod \"image-registry-5d9d95bf5b-rdmpz\" (UID: \"8dcddb1b-6dac-420a-9478-7dd29fd6177d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rdmpz" Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.528866 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8k66\" (UniqueName: \"kubernetes.io/projected/8dcddb1b-6dac-420a-9478-7dd29fd6177d-kube-api-access-b8k66\") pod \"image-registry-5d9d95bf5b-rdmpz\" (UID: \"8dcddb1b-6dac-420a-9478-7dd29fd6177d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rdmpz" Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.528902 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-rdmpz\" (UID: \"8dcddb1b-6dac-420a-9478-7dd29fd6177d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rdmpz" Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.552873 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-rdmpz\" (UID: \"8dcddb1b-6dac-420a-9478-7dd29fd6177d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rdmpz" Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.630300 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8dcddb1b-6dac-420a-9478-7dd29fd6177d-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-rdmpz\" (UID: \"8dcddb1b-6dac-420a-9478-7dd29fd6177d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rdmpz" Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.630384 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8dcddb1b-6dac-420a-9478-7dd29fd6177d-trusted-ca\") pod \"image-registry-5d9d95bf5b-rdmpz\" (UID: \"8dcddb1b-6dac-420a-9478-7dd29fd6177d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rdmpz" Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.630409 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8dcddb1b-6dac-420a-9478-7dd29fd6177d-registry-certificates\") pod \"image-registry-5d9d95bf5b-rdmpz\" (UID: \"8dcddb1b-6dac-420a-9478-7dd29fd6177d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rdmpz" Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.630428 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8dcddb1b-6dac-420a-9478-7dd29fd6177d-registry-tls\") pod \"image-registry-5d9d95bf5b-rdmpz\" (UID: \"8dcddb1b-6dac-420a-9478-7dd29fd6177d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rdmpz" Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.630444 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8dcddb1b-6dac-420a-9478-7dd29fd6177d-bound-sa-token\") pod \"image-registry-5d9d95bf5b-rdmpz\" (UID: \"8dcddb1b-6dac-420a-9478-7dd29fd6177d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rdmpz" Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.630458 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b8k66\" (UniqueName: \"kubernetes.io/projected/8dcddb1b-6dac-420a-9478-7dd29fd6177d-kube-api-access-b8k66\") pod \"image-registry-5d9d95bf5b-rdmpz\" (UID: \"8dcddb1b-6dac-420a-9478-7dd29fd6177d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rdmpz" Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.630520 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8dcddb1b-6dac-420a-9478-7dd29fd6177d-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-rdmpz\" (UID: \"8dcddb1b-6dac-420a-9478-7dd29fd6177d\") " 
pod="openshift-image-registry/image-registry-5d9d95bf5b-rdmpz" Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.630937 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8dcddb1b-6dac-420a-9478-7dd29fd6177d-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-rdmpz\" (UID: \"8dcddb1b-6dac-420a-9478-7dd29fd6177d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rdmpz" Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.631873 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8dcddb1b-6dac-420a-9478-7dd29fd6177d-registry-certificates\") pod \"image-registry-5d9d95bf5b-rdmpz\" (UID: \"8dcddb1b-6dac-420a-9478-7dd29fd6177d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rdmpz" Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.631886 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8dcddb1b-6dac-420a-9478-7dd29fd6177d-trusted-ca\") pod \"image-registry-5d9d95bf5b-rdmpz\" (UID: \"8dcddb1b-6dac-420a-9478-7dd29fd6177d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rdmpz" Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.637116 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8dcddb1b-6dac-420a-9478-7dd29fd6177d-registry-tls\") pod \"image-registry-5d9d95bf5b-rdmpz\" (UID: \"8dcddb1b-6dac-420a-9478-7dd29fd6177d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rdmpz" Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.637486 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8dcddb1b-6dac-420a-9478-7dd29fd6177d-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-rdmpz\" (UID: \"8dcddb1b-6dac-420a-9478-7dd29fd6177d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rdmpz" Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.647888 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8k66\" (UniqueName: \"kubernetes.io/projected/8dcddb1b-6dac-420a-9478-7dd29fd6177d-kube-api-access-b8k66\") pod \"image-registry-5d9d95bf5b-rdmpz\" (UID: \"8dcddb1b-6dac-420a-9478-7dd29fd6177d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rdmpz" Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.662581 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8dcddb1b-6dac-420a-9478-7dd29fd6177d-bound-sa-token\") pod \"image-registry-5d9d95bf5b-rdmpz\" (UID: \"8dcddb1b-6dac-420a-9478-7dd29fd6177d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rdmpz" Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.693721 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-rdmpz" Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.889520 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-rdmpz"] Feb 20 00:16:31 crc kubenswrapper[5119]: W0220 00:16:31.895749 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dcddb1b_6dac_420a_9478_7dd29fd6177d.slice/crio-ee264e363400fe28708e4ec2a7f5ba3deb33318b94f6f7f2e29c28db70aa4a5a WatchSource:0}: Error finding container ee264e363400fe28708e4ec2a7f5ba3deb33318b94f6f7f2e29c28db70aa4a5a: Status 404 returned error can't find the container with id ee264e363400fe28708e4ec2a7f5ba3deb33318b94f6f7f2e29c28db70aa4a5a Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.940772 5119 generic.go:358] "Generic (PLEG): container finished" podID="b2314a6d-2b65-453e-8296-608f8e488ff4" containerID="ed318611ba2205bc2c77138cb5a85105ad1f0e81b9daafad7a2e5f1ece4e7569" exitCode=0 Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.940878 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8cz9b" event={"ID":"b2314a6d-2b65-453e-8296-608f8e488ff4","Type":"ContainerDied","Data":"ed318611ba2205bc2c77138cb5a85105ad1f0e81b9daafad7a2e5f1ece4e7569"} Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.942003 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-rdmpz" event={"ID":"8dcddb1b-6dac-420a-9478-7dd29fd6177d","Type":"ContainerStarted","Data":"ee264e363400fe28708e4ec2a7f5ba3deb33318b94f6f7f2e29c28db70aa4a5a"} Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.944342 5119 generic.go:358] "Generic (PLEG): container finished" podID="11050770-13c5-418e-b5fc-cc1bec3dc51e" containerID="76e58a95d8cb539ab14f5b4c526cb2b2093b8a0ece2c9e88e81bd0f0cd4a123d" exitCode=0 Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.944468 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nb2n" event={"ID":"11050770-13c5-418e-b5fc-cc1bec3dc51e","Type":"ContainerDied","Data":"76e58a95d8cb539ab14f5b4c526cb2b2093b8a0ece2c9e88e81bd0f0cd4a123d"} Feb 20 00:16:31 crc kubenswrapper[5119]: I0220 00:16:31.944504 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nb2n" event={"ID":"11050770-13c5-418e-b5fc-cc1bec3dc51e","Type":"ContainerStarted","Data":"904a632fbd173b4be3e388a231b051b21eea2420fcb7c82d4581900f5bdb3ee3"} Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.513807 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nc7c2"] Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.525777 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nc7c2"] Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.525875 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nc7c2" Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.535040 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.645383 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldfvj\" (UniqueName: \"kubernetes.io/projected/59b63f17-ee7d-48fc-960d-195081b022c7-kube-api-access-ldfvj\") pod \"community-operators-nc7c2\" (UID: \"59b63f17-ee7d-48fc-960d-195081b022c7\") " pod="openshift-marketplace/community-operators-nc7c2" Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.645459 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59b63f17-ee7d-48fc-960d-195081b022c7-utilities\") pod \"community-operators-nc7c2\" (UID: \"59b63f17-ee7d-48fc-960d-195081b022c7\") " pod="openshift-marketplace/community-operators-nc7c2" Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.645488 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59b63f17-ee7d-48fc-960d-195081b022c7-catalog-content\") pod \"community-operators-nc7c2\" (UID: \"59b63f17-ee7d-48fc-960d-195081b022c7\") " pod="openshift-marketplace/community-operators-nc7c2" Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.718733 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hqvdk"] Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.723615 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hqvdk" Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.724369 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hqvdk"] Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.725925 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.747100 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59b63f17-ee7d-48fc-960d-195081b022c7-utilities\") pod \"community-operators-nc7c2\" (UID: \"59b63f17-ee7d-48fc-960d-195081b022c7\") " pod="openshift-marketplace/community-operators-nc7c2" Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.747155 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59b63f17-ee7d-48fc-960d-195081b022c7-catalog-content\") pod \"community-operators-nc7c2\" (UID: \"59b63f17-ee7d-48fc-960d-195081b022c7\") " pod="openshift-marketplace/community-operators-nc7c2" Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.747216 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ldfvj\" (UniqueName: \"kubernetes.io/projected/59b63f17-ee7d-48fc-960d-195081b022c7-kube-api-access-ldfvj\") pod \"community-operators-nc7c2\" (UID: \"59b63f17-ee7d-48fc-960d-195081b022c7\") " pod="openshift-marketplace/community-operators-nc7c2" Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.747731 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59b63f17-ee7d-48fc-960d-195081b022c7-utilities\") pod \"community-operators-nc7c2\" (UID: \"59b63f17-ee7d-48fc-960d-195081b022c7\") " pod="openshift-marketplace/community-operators-nc7c2" Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.748600 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59b63f17-ee7d-48fc-960d-195081b022c7-catalog-content\") pod \"community-operators-nc7c2\" (UID: \"59b63f17-ee7d-48fc-960d-195081b022c7\") " pod="openshift-marketplace/community-operators-nc7c2" Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.768225 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldfvj\" (UniqueName: \"kubernetes.io/projected/59b63f17-ee7d-48fc-960d-195081b022c7-kube-api-access-ldfvj\") pod \"community-operators-nc7c2\" (UID: \"59b63f17-ee7d-48fc-960d-195081b022c7\") " pod="openshift-marketplace/community-operators-nc7c2" Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.849400 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4h27\" (UniqueName: \"kubernetes.io/projected/807ffa59-982e-453f-ba96-3b25858a4b20-kube-api-access-d4h27\") pod \"certified-operators-hqvdk\" (UID: \"807ffa59-982e-453f-ba96-3b25858a4b20\") " pod="openshift-marketplace/certified-operators-hqvdk" Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.849496 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/807ffa59-982e-453f-ba96-3b25858a4b20-catalog-content\") pod 
\"certified-operators-hqvdk\" (UID: \"807ffa59-982e-453f-ba96-3b25858a4b20\") " pod="openshift-marketplace/certified-operators-hqvdk" Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.849533 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/807ffa59-982e-453f-ba96-3b25858a4b20-utilities\") pod \"certified-operators-hqvdk\" (UID: \"807ffa59-982e-453f-ba96-3b25858a4b20\") " pod="openshift-marketplace/certified-operators-hqvdk" Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.856810 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nc7c2" Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.952134 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nb2n" event={"ID":"11050770-13c5-418e-b5fc-cc1bec3dc51e","Type":"ContainerStarted","Data":"712ae5f59672a582b21a1364831411d047833ac4969860e8e177318faa789bc2"} Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.954344 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/807ffa59-982e-453f-ba96-3b25858a4b20-utilities\") pod \"certified-operators-hqvdk\" (UID: \"807ffa59-982e-453f-ba96-3b25858a4b20\") " pod="openshift-marketplace/certified-operators-hqvdk" Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.955702 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d4h27\" (UniqueName: \"kubernetes.io/projected/807ffa59-982e-453f-ba96-3b25858a4b20-kube-api-access-d4h27\") pod \"certified-operators-hqvdk\" (UID: \"807ffa59-982e-453f-ba96-3b25858a4b20\") " pod="openshift-marketplace/certified-operators-hqvdk" Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.955886 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/807ffa59-982e-453f-ba96-3b25858a4b20-catalog-content\") pod \"certified-operators-hqvdk\" (UID: \"807ffa59-982e-453f-ba96-3b25858a4b20\") " pod="openshift-marketplace/certified-operators-hqvdk" Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.956390 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/807ffa59-982e-453f-ba96-3b25858a4b20-catalog-content\") pod \"certified-operators-hqvdk\" (UID: \"807ffa59-982e-453f-ba96-3b25858a4b20\") " pod="openshift-marketplace/certified-operators-hqvdk" Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.956498 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/807ffa59-982e-453f-ba96-3b25858a4b20-utilities\") pod \"certified-operators-hqvdk\" (UID: \"807ffa59-982e-453f-ba96-3b25858a4b20\") " pod="openshift-marketplace/certified-operators-hqvdk" Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.957265 5119 generic.go:358] "Generic (PLEG): container finished" podID="b2314a6d-2b65-453e-8296-608f8e488ff4" containerID="b1013ec928968b736564ba0a819b50c515a62ddd46f794bb1bc5d0833444d86d" exitCode=0 Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.957340 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8cz9b" 
event={"ID":"b2314a6d-2b65-453e-8296-608f8e488ff4","Type":"ContainerDied","Data":"b1013ec928968b736564ba0a819b50c515a62ddd46f794bb1bc5d0833444d86d"} Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.962063 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-rdmpz" event={"ID":"8dcddb1b-6dac-420a-9478-7dd29fd6177d","Type":"ContainerStarted","Data":"da540c2d981b2e1bd13e730d5802bb86ac98afbd479e9a33cef67e4984336117"} Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.977312 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4h27\" (UniqueName: \"kubernetes.io/projected/807ffa59-982e-453f-ba96-3b25858a4b20-kube-api-access-d4h27\") pod \"certified-operators-hqvdk\" (UID: \"807ffa59-982e-453f-ba96-3b25858a4b20\") " pod="openshift-marketplace/certified-operators-hqvdk" Feb 20 00:16:32 crc kubenswrapper[5119]: I0220 00:16:32.978306 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-rdmpz" Feb 20 00:16:33 crc kubenswrapper[5119]: I0220 00:16:33.022437 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-rdmpz" podStartSLOduration=2.022416867 podStartE2EDuration="2.022416867s" podCreationTimestamp="2026-02-20 00:16:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:16:33.020269336 +0000 UTC m=+374.999233638" watchObservedRunningTime="2026-02-20 00:16:33.022416867 +0000 UTC m=+375.001381159" Feb 20 00:16:33 crc kubenswrapper[5119]: I0220 00:16:33.045709 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hqvdk" Feb 20 00:16:33 crc kubenswrapper[5119]: I0220 00:16:33.339646 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nc7c2"] Feb 20 00:16:33 crc kubenswrapper[5119]: I0220 00:16:33.435037 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hqvdk"] Feb 20 00:16:33 crc kubenswrapper[5119]: I0220 00:16:33.971965 5119 generic.go:358] "Generic (PLEG): container finished" podID="11050770-13c5-418e-b5fc-cc1bec3dc51e" containerID="712ae5f59672a582b21a1364831411d047833ac4969860e8e177318faa789bc2" exitCode=0 Feb 20 00:16:33 crc kubenswrapper[5119]: I0220 00:16:33.972093 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nb2n" event={"ID":"11050770-13c5-418e-b5fc-cc1bec3dc51e","Type":"ContainerDied","Data":"712ae5f59672a582b21a1364831411d047833ac4969860e8e177318faa789bc2"} Feb 20 00:16:33 crc kubenswrapper[5119]: I0220 00:16:33.976009 5119 generic.go:358] "Generic (PLEG): container finished" podID="807ffa59-982e-453f-ba96-3b25858a4b20" containerID="671d7b9843c4cc88f9b2af62b1f9aa31a4a74557bc572e31d6dd28095f399e2d" exitCode=0 Feb 20 00:16:33 crc kubenswrapper[5119]: I0220 00:16:33.976091 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqvdk" event={"ID":"807ffa59-982e-453f-ba96-3b25858a4b20","Type":"ContainerDied","Data":"671d7b9843c4cc88f9b2af62b1f9aa31a4a74557bc572e31d6dd28095f399e2d"} Feb 20 00:16:33 crc kubenswrapper[5119]: I0220 00:16:33.976190 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqvdk" event={"ID":"807ffa59-982e-453f-ba96-3b25858a4b20","Type":"ContainerStarted","Data":"5319a5ae67037d741eb69db329ee7252ac1b3d4cf7ce3d4103b75cbb2d6e877d"} Feb 20 00:16:33 crc kubenswrapper[5119]: I0220 00:16:33.979454 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8cz9b" event={"ID":"b2314a6d-2b65-453e-8296-608f8e488ff4","Type":"ContainerStarted","Data":"6373cebe6b8769499a6737ddef6649b5f286d004b35763bbf4f25e90e92e1a8c"} Feb 20 00:16:33 crc kubenswrapper[5119]: I0220 00:16:33.981233 5119 generic.go:358] "Generic (PLEG): container finished" podID="59b63f17-ee7d-48fc-960d-195081b022c7" containerID="113f8fac1dbecf885c4884e1ea9e4df93bcd1cb59315783c942149a6457dcb68" exitCode=0 Feb 20 00:16:33 crc kubenswrapper[5119]: I0220 00:16:33.981318 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc7c2" event={"ID":"59b63f17-ee7d-48fc-960d-195081b022c7","Type":"ContainerDied","Data":"113f8fac1dbecf885c4884e1ea9e4df93bcd1cb59315783c942149a6457dcb68"} Feb 20 00:16:33 crc kubenswrapper[5119]: I0220 00:16:33.981350 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc7c2" event={"ID":"59b63f17-ee7d-48fc-960d-195081b022c7","Type":"ContainerStarted","Data":"e5922b5ac5ff9576466cc9dd3a8db8e20680dac1cdbe8c1c08f4cc2b25f664a2"} Feb 20 00:16:34 crc kubenswrapper[5119]: I0220 00:16:34.091161 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8cz9b" podStartSLOduration=3.483099243 podStartE2EDuration="4.091131332s" podCreationTimestamp="2026-02-20 00:16:30 +0000 UTC" firstStartedPulling="2026-02-20 00:16:31.941951508 +0000 UTC m=+373.920915800" lastFinishedPulling="2026-02-20 
00:16:32.549983597 +0000 UTC m=+374.528947889" observedRunningTime="2026-02-20 00:16:34.086682287 +0000 UTC m=+376.065646589" watchObservedRunningTime="2026-02-20 00:16:34.091131332 +0000 UTC m=+376.070095614" Feb 20 00:16:34 crc kubenswrapper[5119]: I0220 00:16:34.991611 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc7c2" event={"ID":"59b63f17-ee7d-48fc-960d-195081b022c7","Type":"ContainerStarted","Data":"af806515799ce049ccd7546faaf89cfb2e116e18ab1ac8231b8a0f925b345478"} Feb 20 00:16:34 crc kubenswrapper[5119]: I0220 00:16:34.995531 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nb2n" event={"ID":"11050770-13c5-418e-b5fc-cc1bec3dc51e","Type":"ContainerStarted","Data":"b193cef1e9130130b45d791bd91b8161d5d7e84430ea964d5144754703e28c00"} Feb 20 00:16:35 crc kubenswrapper[5119]: I0220 00:16:35.030162 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5nb2n" podStartSLOduration=4.39578521 podStartE2EDuration="5.030144045s" podCreationTimestamp="2026-02-20 00:16:30 +0000 UTC" firstStartedPulling="2026-02-20 00:16:31.94694745 +0000 UTC m=+373.925911792" lastFinishedPulling="2026-02-20 00:16:32.581306335 +0000 UTC m=+374.560270627" observedRunningTime="2026-02-20 00:16:35.02641611 +0000 UTC m=+377.005380402" watchObservedRunningTime="2026-02-20 00:16:35.030144045 +0000 UTC m=+377.009108337" Feb 20 00:16:36 crc kubenswrapper[5119]: I0220 00:16:36.002442 5119 generic.go:358] "Generic (PLEG): container finished" podID="59b63f17-ee7d-48fc-960d-195081b022c7" containerID="af806515799ce049ccd7546faaf89cfb2e116e18ab1ac8231b8a0f925b345478" exitCode=0 Feb 20 00:16:36 crc kubenswrapper[5119]: I0220 00:16:36.002513 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc7c2" event={"ID":"59b63f17-ee7d-48fc-960d-195081b022c7","Type":"ContainerDied","Data":"af806515799ce049ccd7546faaf89cfb2e116e18ab1ac8231b8a0f925b345478"} Feb 20 00:16:36 crc kubenswrapper[5119]: I0220 00:16:36.006680 5119 generic.go:358] "Generic (PLEG): container finished" podID="807ffa59-982e-453f-ba96-3b25858a4b20" containerID="e91efc53cf855fab1c3d9b74ef22ad20554bfad297e4c309da3ddbf0c352c0b9" exitCode=0 Feb 20 00:16:36 crc kubenswrapper[5119]: I0220 00:16:36.006818 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqvdk" event={"ID":"807ffa59-982e-453f-ba96-3b25858a4b20","Type":"ContainerDied","Data":"e91efc53cf855fab1c3d9b74ef22ad20554bfad297e4c309da3ddbf0c352c0b9"} Feb 20 00:16:37 crc kubenswrapper[5119]: I0220 00:16:37.013664 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc7c2" event={"ID":"59b63f17-ee7d-48fc-960d-195081b022c7","Type":"ContainerStarted","Data":"207138a92093b513ded62020396654745af296b64e879337098d41be2c3248fb"} Feb 20 00:16:37 crc kubenswrapper[5119]: I0220 00:16:37.022908 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqvdk" event={"ID":"807ffa59-982e-453f-ba96-3b25858a4b20","Type":"ContainerStarted","Data":"22599aff316b9df99436e468ffe8e9519b9b3a9c742a3346b28a4fdf879a9f9b"} Feb 20 00:16:37 crc kubenswrapper[5119]: I0220 00:16:37.040400 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nc7c2" podStartSLOduration=4.335937373 podStartE2EDuration="5.040381003s" 
podCreationTimestamp="2026-02-20 00:16:32 +0000 UTC" firstStartedPulling="2026-02-20 00:16:33.982678371 +0000 UTC m=+375.961642693" lastFinishedPulling="2026-02-20 00:16:34.687122031 +0000 UTC m=+376.666086323" observedRunningTime="2026-02-20 00:16:37.03638998 +0000 UTC m=+379.015354292" watchObservedRunningTime="2026-02-20 00:16:37.040381003 +0000 UTC m=+379.019345305" Feb 20 00:16:37 crc kubenswrapper[5119]: I0220 00:16:37.061188 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hqvdk" podStartSLOduration=4.223719106 podStartE2EDuration="5.061168512s" podCreationTimestamp="2026-02-20 00:16:32 +0000 UTC" firstStartedPulling="2026-02-20 00:16:33.976715952 +0000 UTC m=+375.955680234" lastFinishedPulling="2026-02-20 00:16:34.814165308 +0000 UTC m=+376.793129640" observedRunningTime="2026-02-20 00:16:37.057936851 +0000 UTC m=+379.036901143" watchObservedRunningTime="2026-02-20 00:16:37.061168512 +0000 UTC m=+379.040132804" Feb 20 00:16:40 crc kubenswrapper[5119]: I0220 00:16:40.455554 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8cz9b" Feb 20 00:16:40 crc kubenswrapper[5119]: I0220 00:16:40.457364 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-8cz9b" Feb 20 00:16:40 crc kubenswrapper[5119]: I0220 00:16:40.505600 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8cz9b" Feb 20 00:16:40 crc kubenswrapper[5119]: I0220 00:16:40.675636 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5nb2n" Feb 20 00:16:40 crc kubenswrapper[5119]: I0220 00:16:40.676143 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-5nb2n" Feb 20 00:16:40 crc kubenswrapper[5119]: I0220 00:16:40.743655 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5nb2n" Feb 20 00:16:41 crc kubenswrapper[5119]: I0220 00:16:41.092136 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8cz9b" Feb 20 00:16:41 crc kubenswrapper[5119]: I0220 00:16:41.095374 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5nb2n" Feb 20 00:16:42 crc kubenswrapper[5119]: I0220 00:16:42.160646 5119 patch_prober.go:28] interesting pod/machine-config-daemon-l7jjp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:16:42 crc kubenswrapper[5119]: I0220 00:16:42.161043 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" podUID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:16:42 crc kubenswrapper[5119]: I0220 00:16:42.865140 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nc7c2" Feb 20 00:16:42 crc kubenswrapper[5119]: I0220 00:16:42.865197 5119 kubelet.go:2658] 
"SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-nc7c2" Feb 20 00:16:42 crc kubenswrapper[5119]: I0220 00:16:42.900283 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nc7c2" Feb 20 00:16:43 crc kubenswrapper[5119]: I0220 00:16:43.115111 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hqvdk" Feb 20 00:16:43 crc kubenswrapper[5119]: I0220 00:16:43.115739 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-hqvdk" Feb 20 00:16:43 crc kubenswrapper[5119]: I0220 00:16:43.154943 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hqvdk" Feb 20 00:16:43 crc kubenswrapper[5119]: I0220 00:16:43.160738 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nc7c2" Feb 20 00:16:44 crc kubenswrapper[5119]: I0220 00:16:44.179739 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hqvdk" Feb 20 00:16:55 crc kubenswrapper[5119]: I0220 00:16:55.001644 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-rdmpz" Feb 20 00:16:55 crc kubenswrapper[5119]: I0220 00:16:55.068020 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-qh5n2"] Feb 20 00:17:12 crc kubenswrapper[5119]: I0220 00:17:12.161121 5119 patch_prober.go:28] interesting pod/machine-config-daemon-l7jjp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:17:12 crc kubenswrapper[5119]: I0220 00:17:12.161962 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" podUID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.105920 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" podUID="6ac5e92f-06f1-4557-8ab1-0a48d313b01c" containerName="registry" containerID="cri-o://7d7b53517a8b5b89b510635274583c4d394183c0fc9dbab20add28852f213de5" gracePeriod=30 Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.522196 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.560697 5119 generic.go:358] "Generic (PLEG): container finished" podID="6ac5e92f-06f1-4557-8ab1-0a48d313b01c" containerID="7d7b53517a8b5b89b510635274583c4d394183c0fc9dbab20add28852f213de5" exitCode=0 Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.560785 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" event={"ID":"6ac5e92f-06f1-4557-8ab1-0a48d313b01c","Type":"ContainerDied","Data":"7d7b53517a8b5b89b510635274583c4d394183c0fc9dbab20add28852f213de5"} Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.560813 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.560834 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-qh5n2" event={"ID":"6ac5e92f-06f1-4557-8ab1-0a48d313b01c","Type":"ContainerDied","Data":"3ccf782f2d3341ec45ad1358bec62903bef370ee33a60a08b53fc1d9bddeffbb"} Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.560851 5119 scope.go:117] "RemoveContainer" containerID="7d7b53517a8b5b89b510635274583c4d394183c0fc9dbab20add28852f213de5" Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.587633 5119 scope.go:117] "RemoveContainer" containerID="7d7b53517a8b5b89b510635274583c4d394183c0fc9dbab20add28852f213de5" Feb 20 00:17:20 crc kubenswrapper[5119]: E0220 00:17:20.588439 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d7b53517a8b5b89b510635274583c4d394183c0fc9dbab20add28852f213de5\": container with ID starting with 7d7b53517a8b5b89b510635274583c4d394183c0fc9dbab20add28852f213de5 not found: ID does not exist" containerID="7d7b53517a8b5b89b510635274583c4d394183c0fc9dbab20add28852f213de5" Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.588476 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d7b53517a8b5b89b510635274583c4d394183c0fc9dbab20add28852f213de5"} err="failed to get container status \"7d7b53517a8b5b89b510635274583c4d394183c0fc9dbab20add28852f213de5\": rpc error: code = NotFound desc = could not find container \"7d7b53517a8b5b89b510635274583c4d394183c0fc9dbab20add28852f213de5\": container with ID starting with 7d7b53517a8b5b89b510635274583c4d394183c0fc9dbab20add28852f213de5 not found: ID does not exist" Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.689890 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-installation-pull-secrets\") pod \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.690503 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-bound-sa-token\") pod \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.690756 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-ca-trust-extracted\") pod \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.690849 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-registry-tls\") pod \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.690905 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-registry-certificates\") pod \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.690968 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-trusted-ca\") pod \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.691187 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.691248 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bqq9\" (UniqueName: \"kubernetes.io/projected/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-kube-api-access-7bqq9\") pod \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\" (UID: \"6ac5e92f-06f1-4557-8ab1-0a48d313b01c\") " Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.692817 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "6ac5e92f-06f1-4557-8ab1-0a48d313b01c" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.693575 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "6ac5e92f-06f1-4557-8ab1-0a48d313b01c" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.703042 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-kube-api-access-7bqq9" (OuterVolumeSpecName: "kube-api-access-7bqq9") pod "6ac5e92f-06f1-4557-8ab1-0a48d313b01c" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c"). InnerVolumeSpecName "kube-api-access-7bqq9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.703129 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "6ac5e92f-06f1-4557-8ab1-0a48d313b01c" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.703394 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "6ac5e92f-06f1-4557-8ab1-0a48d313b01c" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.709697 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "6ac5e92f-06f1-4557-8ab1-0a48d313b01c" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.712247 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "6ac5e92f-06f1-4557-8ab1-0a48d313b01c" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.720811 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "6ac5e92f-06f1-4557-8ab1-0a48d313b01c" (UID: "6ac5e92f-06f1-4557-8ab1-0a48d313b01c"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.793219 5119 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.793278 5119 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.793288 5119 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.793329 5119 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.793341 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7bqq9\" (UniqueName: \"kubernetes.io/projected/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-kube-api-access-7bqq9\") on node \"crc\" DevicePath \"\"" Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.793351 5119 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.793360 5119 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6ac5e92f-06f1-4557-8ab1-0a48d313b01c-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.922999 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-qh5n2"] Feb 20 00:17:20 crc kubenswrapper[5119]: I0220 00:17:20.928196 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-qh5n2"] Feb 20 00:17:22 crc kubenswrapper[5119]: I0220 00:17:22.869322 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ac5e92f-06f1-4557-8ab1-0a48d313b01c" path="/var/lib/kubelet/pods/6ac5e92f-06f1-4557-8ab1-0a48d313b01c/volumes" Feb 20 00:17:42 crc kubenswrapper[5119]: I0220 00:17:42.161443 5119 patch_prober.go:28] interesting pod/machine-config-daemon-l7jjp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:17:42 crc kubenswrapper[5119]: I0220 00:17:42.162110 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" podUID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:17:42 crc kubenswrapper[5119]: I0220 00:17:42.162179 5119 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" Feb 20 
00:17:42 crc kubenswrapper[5119]: I0220 00:17:42.163234 5119 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7379ef2df3511a5b4a842ddfae4a6c59f8bb16e8775afbafe4bb7b62e106daae"} pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 20 00:17:42 crc kubenswrapper[5119]: I0220 00:17:42.163374 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" podUID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerName="machine-config-daemon" containerID="cri-o://7379ef2df3511a5b4a842ddfae4a6c59f8bb16e8775afbafe4bb7b62e106daae" gracePeriod=600 Feb 20 00:17:42 crc kubenswrapper[5119]: I0220 00:17:42.723073 5119 generic.go:358] "Generic (PLEG): container finished" podID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerID="7379ef2df3511a5b4a842ddfae4a6c59f8bb16e8775afbafe4bb7b62e106daae" exitCode=0 Feb 20 00:17:42 crc kubenswrapper[5119]: I0220 00:17:42.723161 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" event={"ID":"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d","Type":"ContainerDied","Data":"7379ef2df3511a5b4a842ddfae4a6c59f8bb16e8775afbafe4bb7b62e106daae"} Feb 20 00:17:42 crc kubenswrapper[5119]: I0220 00:17:42.723954 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" event={"ID":"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d","Type":"ContainerStarted","Data":"eb031711d14c34fef15b7fa26c19329bf458b6e44e95d8bdc0966d1b83c33a00"} Feb 20 00:17:42 crc kubenswrapper[5119]: I0220 00:17:42.724014 5119 scope.go:117] "RemoveContainer" containerID="e72863f3bb34d69a32e4bf16d58f08f3318fc63f4aed8833baffafd71c833abb" Feb 20 00:18:00 crc kubenswrapper[5119]: I0220 00:18:00.151399 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29525778-pvrq6"] Feb 20 00:18:00 crc kubenswrapper[5119]: I0220 00:18:00.153412 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6ac5e92f-06f1-4557-8ab1-0a48d313b01c" containerName="registry" Feb 20 00:18:00 crc kubenswrapper[5119]: I0220 00:18:00.153443 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac5e92f-06f1-4557-8ab1-0a48d313b01c" containerName="registry" Feb 20 00:18:00 crc kubenswrapper[5119]: I0220 00:18:00.153691 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="6ac5e92f-06f1-4557-8ab1-0a48d313b01c" containerName="registry" Feb 20 00:18:00 crc kubenswrapper[5119]: I0220 00:18:00.161050 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525778-pvrq6"] Feb 20 00:18:00 crc kubenswrapper[5119]: I0220 00:18:00.161221 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525778-pvrq6" Feb 20 00:18:00 crc kubenswrapper[5119]: I0220 00:18:00.164441 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 20 00:18:00 crc kubenswrapper[5119]: I0220 00:18:00.164712 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 20 00:18:00 crc kubenswrapper[5119]: I0220 00:18:00.164955 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-nmc85\"" Feb 20 00:18:00 crc kubenswrapper[5119]: I0220 00:18:00.304071 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qc2s\" (UniqueName: \"kubernetes.io/projected/e246885f-c45f-4a8a-879d-27add555cb0b-kube-api-access-5qc2s\") pod \"auto-csr-approver-29525778-pvrq6\" (UID: \"e246885f-c45f-4a8a-879d-27add555cb0b\") " pod="openshift-infra/auto-csr-approver-29525778-pvrq6" Feb 20 00:18:00 crc kubenswrapper[5119]: I0220 00:18:00.405724 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5qc2s\" (UniqueName: \"kubernetes.io/projected/e246885f-c45f-4a8a-879d-27add555cb0b-kube-api-access-5qc2s\") pod \"auto-csr-approver-29525778-pvrq6\" (UID: \"e246885f-c45f-4a8a-879d-27add555cb0b\") " pod="openshift-infra/auto-csr-approver-29525778-pvrq6" Feb 20 00:18:00 crc kubenswrapper[5119]: I0220 00:18:00.428880 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qc2s\" (UniqueName: \"kubernetes.io/projected/e246885f-c45f-4a8a-879d-27add555cb0b-kube-api-access-5qc2s\") pod \"auto-csr-approver-29525778-pvrq6\" (UID: \"e246885f-c45f-4a8a-879d-27add555cb0b\") " pod="openshift-infra/auto-csr-approver-29525778-pvrq6" Feb 20 00:18:00 crc kubenswrapper[5119]: I0220 00:18:00.501104 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525778-pvrq6" Feb 20 00:18:00 crc kubenswrapper[5119]: I0220 00:18:00.689959 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525778-pvrq6"] Feb 20 00:18:00 crc kubenswrapper[5119]: I0220 00:18:00.870971 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525778-pvrq6" event={"ID":"e246885f-c45f-4a8a-879d-27add555cb0b","Type":"ContainerStarted","Data":"c9964c32723c9f1334bab50287ac85fd48d4f74222fa1dbb74c5a398241ceb2c"} Feb 20 00:18:04 crc kubenswrapper[5119]: I0220 00:18:04.898106 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525778-pvrq6" event={"ID":"e246885f-c45f-4a8a-879d-27add555cb0b","Type":"ContainerStarted","Data":"c579104b50b04386a0930e094cc8fb30b667dfc6862c0c78aee0860c53152c04"} Feb 20 00:18:04 crc kubenswrapper[5119]: I0220 00:18:04.922338 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29525778-pvrq6" podStartSLOduration=1.235154813 podStartE2EDuration="4.922319809s" podCreationTimestamp="2026-02-20 00:18:00 +0000 UTC" firstStartedPulling="2026-02-20 00:18:00.701617862 +0000 UTC m=+462.680582154" lastFinishedPulling="2026-02-20 00:18:04.388782848 +0000 UTC m=+466.367747150" observedRunningTime="2026-02-20 00:18:04.92126195 +0000 UTC m=+466.900226282" watchObservedRunningTime="2026-02-20 00:18:04.922319809 +0000 UTC m=+466.901284111" Feb 20 00:18:05 crc kubenswrapper[5119]: I0220 00:18:05.243945 5119 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-pkr68" Feb 20 00:18:05 crc kubenswrapper[5119]: I0220 00:18:05.281189 5119 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-pkr68" Feb 20 00:18:05 crc kubenswrapper[5119]: E0220 00:18:05.318994 5119 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode246885f_c45f_4a8a_879d_27add555cb0b.slice/crio-conmon-c579104b50b04386a0930e094cc8fb30b667dfc6862c0c78aee0860c53152c04.scope\": RecentStats: unable to find data in memory cache]" Feb 20 00:18:05 crc kubenswrapper[5119]: I0220 00:18:05.906199 5119 generic.go:358] "Generic (PLEG): container finished" podID="e246885f-c45f-4a8a-879d-27add555cb0b" containerID="c579104b50b04386a0930e094cc8fb30b667dfc6862c0c78aee0860c53152c04" exitCode=0 Feb 20 00:18:05 crc kubenswrapper[5119]: I0220 00:18:05.906373 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525778-pvrq6" event={"ID":"e246885f-c45f-4a8a-879d-27add555cb0b","Type":"ContainerDied","Data":"c579104b50b04386a0930e094cc8fb30b667dfc6862c0c78aee0860c53152c04"} Feb 20 00:18:06 crc kubenswrapper[5119]: I0220 00:18:06.282736 5119 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-22 00:13:05 +0000 UTC" deadline="2026-03-17 03:22:05.467028688 +0000 UTC" Feb 20 00:18:06 crc kubenswrapper[5119]: I0220 00:18:06.282798 5119 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="603h3m59.184236356s" Feb 20 00:18:07 crc kubenswrapper[5119]: I0220 00:18:07.191526 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525778-pvrq6" Feb 20 00:18:07 crc kubenswrapper[5119]: I0220 00:18:07.215074 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qc2s\" (UniqueName: \"kubernetes.io/projected/e246885f-c45f-4a8a-879d-27add555cb0b-kube-api-access-5qc2s\") pod \"e246885f-c45f-4a8a-879d-27add555cb0b\" (UID: \"e246885f-c45f-4a8a-879d-27add555cb0b\") " Feb 20 00:18:07 crc kubenswrapper[5119]: I0220 00:18:07.225522 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e246885f-c45f-4a8a-879d-27add555cb0b-kube-api-access-5qc2s" (OuterVolumeSpecName: "kube-api-access-5qc2s") pod "e246885f-c45f-4a8a-879d-27add555cb0b" (UID: "e246885f-c45f-4a8a-879d-27add555cb0b"). InnerVolumeSpecName "kube-api-access-5qc2s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:18:07 crc kubenswrapper[5119]: I0220 00:18:07.283936 5119 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-22 00:13:05 +0000 UTC" deadline="2026-03-13 15:29:06.799125222 +0000 UTC" Feb 20 00:18:07 crc kubenswrapper[5119]: I0220 00:18:07.284007 5119 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="519h10m59.515127128s" Feb 20 00:18:07 crc kubenswrapper[5119]: I0220 00:18:07.316983 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5qc2s\" (UniqueName: \"kubernetes.io/projected/e246885f-c45f-4a8a-879d-27add555cb0b-kube-api-access-5qc2s\") on node \"crc\" DevicePath \"\"" Feb 20 00:18:07 crc kubenswrapper[5119]: I0220 00:18:07.924839 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525778-pvrq6" Feb 20 00:18:07 crc kubenswrapper[5119]: I0220 00:18:07.924872 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525778-pvrq6" event={"ID":"e246885f-c45f-4a8a-879d-27add555cb0b","Type":"ContainerDied","Data":"c9964c32723c9f1334bab50287ac85fd48d4f74222fa1dbb74c5a398241ceb2c"} Feb 20 00:18:07 crc kubenswrapper[5119]: I0220 00:18:07.924947 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9964c32723c9f1334bab50287ac85fd48d4f74222fa1dbb74c5a398241ceb2c" Feb 20 00:18:19 crc kubenswrapper[5119]: I0220 00:18:19.249730 5119 scope.go:117] "RemoveContainer" containerID="a1055600a504f918cfaf9eb422d6d77c30577ba34c68084254e7d3d399465774" Feb 20 00:18:19 crc kubenswrapper[5119]: I0220 00:18:19.288611 5119 scope.go:117] "RemoveContainer" containerID="d4f7da01f3dc86caf7fd824b6f35836203cc49897c458c262289c2d45b168504" Feb 20 00:19:19 crc kubenswrapper[5119]: I0220 00:19:19.389585 5119 scope.go:117] "RemoveContainer" containerID="6ac7f7b9c00dc82eab3e9c77207268a7cc473025f63fc8099e0d1198ee74a390" Feb 20 00:19:19 crc kubenswrapper[5119]: I0220 00:19:19.414561 5119 scope.go:117] "RemoveContainer" containerID="bd1887dc4f63c79c3ac660eaeb0a5968c299febe679557707ea144ee7f764415" Feb 20 00:19:42 crc kubenswrapper[5119]: I0220 00:19:42.160866 5119 patch_prober.go:28] interesting pod/machine-config-daemon-l7jjp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:19:42 crc kubenswrapper[5119]: I0220 00:19:42.162928 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" podUID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:20:00 crc kubenswrapper[5119]: I0220 00:20:00.132898 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29525780-c78v7"] Feb 20 00:20:00 crc kubenswrapper[5119]: I0220 00:20:00.135042 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e246885f-c45f-4a8a-879d-27add555cb0b" containerName="oc" Feb 20 00:20:00 crc kubenswrapper[5119]: I0220 00:20:00.135061 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="e246885f-c45f-4a8a-879d-27add555cb0b" containerName="oc" Feb 20 00:20:00 crc kubenswrapper[5119]: I0220 00:20:00.135175 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="e246885f-c45f-4a8a-879d-27add555cb0b" containerName="oc" Feb 20 00:20:00 crc kubenswrapper[5119]: I0220 00:20:00.279362 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525780-c78v7"] Feb 20 00:20:00 crc kubenswrapper[5119]: I0220 00:20:00.279522 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525780-c78v7" Feb 20 00:20:00 crc kubenswrapper[5119]: I0220 00:20:00.284103 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 20 00:20:00 crc kubenswrapper[5119]: I0220 00:20:00.284629 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 20 00:20:00 crc kubenswrapper[5119]: I0220 00:20:00.284971 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-nmc85\"" Feb 20 00:20:00 crc kubenswrapper[5119]: I0220 00:20:00.379756 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgrb4\" (UniqueName: \"kubernetes.io/projected/5bb94ffb-0736-4317-979f-e6be8f6cf7d9-kube-api-access-xgrb4\") pod \"auto-csr-approver-29525780-c78v7\" (UID: \"5bb94ffb-0736-4317-979f-e6be8f6cf7d9\") " pod="openshift-infra/auto-csr-approver-29525780-c78v7" Feb 20 00:20:00 crc kubenswrapper[5119]: I0220 00:20:00.481146 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xgrb4\" (UniqueName: \"kubernetes.io/projected/5bb94ffb-0736-4317-979f-e6be8f6cf7d9-kube-api-access-xgrb4\") pod \"auto-csr-approver-29525780-c78v7\" (UID: \"5bb94ffb-0736-4317-979f-e6be8f6cf7d9\") " pod="openshift-infra/auto-csr-approver-29525780-c78v7" Feb 20 00:20:00 crc kubenswrapper[5119]: I0220 00:20:00.514007 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgrb4\" (UniqueName: \"kubernetes.io/projected/5bb94ffb-0736-4317-979f-e6be8f6cf7d9-kube-api-access-xgrb4\") pod \"auto-csr-approver-29525780-c78v7\" (UID: \"5bb94ffb-0736-4317-979f-e6be8f6cf7d9\") " pod="openshift-infra/auto-csr-approver-29525780-c78v7" Feb 20 00:20:00 crc kubenswrapper[5119]: I0220 00:20:00.606661 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525780-c78v7" Feb 20 00:20:00 crc kubenswrapper[5119]: I0220 00:20:00.863020 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525780-c78v7"] Feb 20 00:20:01 crc kubenswrapper[5119]: I0220 00:20:01.749130 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525780-c78v7" event={"ID":"5bb94ffb-0736-4317-979f-e6be8f6cf7d9","Type":"ContainerStarted","Data":"2e11ef65fac77b1877024b58b6b1130922eae452a1274970e88ef263266441a2"} Feb 20 00:20:02 crc kubenswrapper[5119]: I0220 00:20:02.757508 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525780-c78v7" event={"ID":"5bb94ffb-0736-4317-979f-e6be8f6cf7d9","Type":"ContainerStarted","Data":"51463116e778b4174ca20273762680cc39cc30c6b2de294759e498fb64a5f71d"} Feb 20 00:20:02 crc kubenswrapper[5119]: I0220 00:20:02.776417 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29525780-c78v7" podStartSLOduration=1.410324905 podStartE2EDuration="2.776402546s" podCreationTimestamp="2026-02-20 00:20:00 +0000 UTC" firstStartedPulling="2026-02-20 00:20:00.870090198 +0000 UTC m=+582.849054510" lastFinishedPulling="2026-02-20 00:20:02.236167809 +0000 UTC m=+584.215132151" observedRunningTime="2026-02-20 00:20:02.772591965 +0000 UTC m=+584.751556347" watchObservedRunningTime="2026-02-20 00:20:02.776402546 +0000 UTC m=+584.755366838" Feb 20 00:20:03 crc kubenswrapper[5119]: I0220 00:20:03.766030 5119 generic.go:358] "Generic (PLEG): container finished" podID="5bb94ffb-0736-4317-979f-e6be8f6cf7d9" containerID="51463116e778b4174ca20273762680cc39cc30c6b2de294759e498fb64a5f71d" exitCode=0 Feb 20 00:20:03 crc kubenswrapper[5119]: I0220 00:20:03.766146 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525780-c78v7" event={"ID":"5bb94ffb-0736-4317-979f-e6be8f6cf7d9","Type":"ContainerDied","Data":"51463116e778b4174ca20273762680cc39cc30c6b2de294759e498fb64a5f71d"} Feb 20 00:20:05 crc kubenswrapper[5119]: I0220 00:20:05.102410 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525780-c78v7" Feb 20 00:20:05 crc kubenswrapper[5119]: I0220 00:20:05.156604 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgrb4\" (UniqueName: \"kubernetes.io/projected/5bb94ffb-0736-4317-979f-e6be8f6cf7d9-kube-api-access-xgrb4\") pod \"5bb94ffb-0736-4317-979f-e6be8f6cf7d9\" (UID: \"5bb94ffb-0736-4317-979f-e6be8f6cf7d9\") " Feb 20 00:20:05 crc kubenswrapper[5119]: I0220 00:20:05.165774 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bb94ffb-0736-4317-979f-e6be8f6cf7d9-kube-api-access-xgrb4" (OuterVolumeSpecName: "kube-api-access-xgrb4") pod "5bb94ffb-0736-4317-979f-e6be8f6cf7d9" (UID: "5bb94ffb-0736-4317-979f-e6be8f6cf7d9"). InnerVolumeSpecName "kube-api-access-xgrb4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:20:05 crc kubenswrapper[5119]: I0220 00:20:05.258574 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xgrb4\" (UniqueName: \"kubernetes.io/projected/5bb94ffb-0736-4317-979f-e6be8f6cf7d9-kube-api-access-xgrb4\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:05 crc kubenswrapper[5119]: I0220 00:20:05.781861 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525780-c78v7" event={"ID":"5bb94ffb-0736-4317-979f-e6be8f6cf7d9","Type":"ContainerDied","Data":"2e11ef65fac77b1877024b58b6b1130922eae452a1274970e88ef263266441a2"} Feb 20 00:20:05 crc kubenswrapper[5119]: I0220 00:20:05.782408 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e11ef65fac77b1877024b58b6b1130922eae452a1274970e88ef263266441a2" Feb 20 00:20:05 crc kubenswrapper[5119]: I0220 00:20:05.781906 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525780-c78v7" Feb 20 00:20:12 crc kubenswrapper[5119]: I0220 00:20:12.161186 5119 patch_prober.go:28] interesting pod/machine-config-daemon-l7jjp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:20:12 crc kubenswrapper[5119]: I0220 00:20:12.161593 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" podUID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:20:19 crc kubenswrapper[5119]: I0220 00:20:19.148086 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 20 00:20:19 crc kubenswrapper[5119]: I0220 00:20:19.148669 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 20 00:20:42 crc kubenswrapper[5119]: I0220 00:20:42.161062 5119 patch_prober.go:28] interesting pod/machine-config-daemon-l7jjp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:20:42 crc kubenswrapper[5119]: I0220 00:20:42.161951 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" podUID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:20:42 crc kubenswrapper[5119]: I0220 00:20:42.162025 5119 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" Feb 20 00:20:42 crc kubenswrapper[5119]: I0220 00:20:42.163919 5119 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"eb031711d14c34fef15b7fa26c19329bf458b6e44e95d8bdc0966d1b83c33a00"} pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 20 00:20:42 crc kubenswrapper[5119]: I0220 00:20:42.163997 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" podUID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerName="machine-config-daemon" containerID="cri-o://eb031711d14c34fef15b7fa26c19329bf458b6e44e95d8bdc0966d1b83c33a00" gracePeriod=600 Feb 20 00:20:42 crc kubenswrapper[5119]: I0220 00:20:42.303062 5119 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 20 00:20:43 crc kubenswrapper[5119]: I0220 00:20:43.090765 5119 generic.go:358] "Generic (PLEG): container finished" podID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerID="eb031711d14c34fef15b7fa26c19329bf458b6e44e95d8bdc0966d1b83c33a00" exitCode=0 Feb 20 00:20:43 crc kubenswrapper[5119]: I0220 00:20:43.090813 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" event={"ID":"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d","Type":"ContainerDied","Data":"eb031711d14c34fef15b7fa26c19329bf458b6e44e95d8bdc0966d1b83c33a00"} Feb 20 00:20:43 crc kubenswrapper[5119]: I0220 00:20:43.091669 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" event={"ID":"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d","Type":"ContainerStarted","Data":"6e692bfc0f8e3640cdfb629db9ce0f6fdd7db4e721f07aacfb3653d9f3057c7c"} Feb 20 00:20:43 crc kubenswrapper[5119]: I0220 00:20:43.091700 5119 scope.go:117] "RemoveContainer" containerID="7379ef2df3511a5b4a842ddfae4a6c59f8bb16e8775afbafe4bb7b62e106daae" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.079294 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9qqjj"] Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.080745 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9qqjj" podUID="a9bc4f8d-447f-4dd2-a865-6fd066513b13" containerName="kube-rbac-proxy" containerID="cri-o://22d144085753de360f7f333957476990e604e3f71d125d60b727e26f35243f5b" gracePeriod=30 Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.080827 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9qqjj" podUID="a9bc4f8d-447f-4dd2-a865-6fd066513b13" containerName="ovnkube-cluster-manager" containerID="cri-o://2c3457a4f0c4fb5545204589c52f65cbcfa555040c04e5d8057aa98c7adf983c" gracePeriod=30 Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.276298 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-m42rs"] Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.276874 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="ovn-controller" containerID="cri-o://cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218" gracePeriod=30 Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.277378 5119 kuberuntime_container.go:858] "Killing 
container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="nbdb" containerID="cri-o://f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47" gracePeriod=30 Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.276952 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a" gracePeriod=30 Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.276987 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="northd" containerID="cri-o://ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd" gracePeriod=30 Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.276999 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="ovn-acl-logging" containerID="cri-o://3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b" gracePeriod=30 Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.277099 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="sbdb" containerID="cri-o://b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e" gracePeriod=30 Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.276947 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="kube-rbac-proxy-node" containerID="cri-o://010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b" gracePeriod=30 Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.349078 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="ovnkube-controller" containerID="cri-o://674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb" gracePeriod=30 Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.351207 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9qqjj" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.384348 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-tqh6q"] Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.384976 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a9bc4f8d-447f-4dd2-a865-6fd066513b13" containerName="kube-rbac-proxy" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.385000 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9bc4f8d-447f-4dd2-a865-6fd066513b13" containerName="kube-rbac-proxy" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.385019 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5bb94ffb-0736-4317-979f-e6be8f6cf7d9" containerName="oc" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.385025 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bb94ffb-0736-4317-979f-e6be8f6cf7d9" containerName="oc" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.385037 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a9bc4f8d-447f-4dd2-a865-6fd066513b13" containerName="ovnkube-cluster-manager" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.385043 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9bc4f8d-447f-4dd2-a865-6fd066513b13" containerName="ovnkube-cluster-manager" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.385149 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="a9bc4f8d-447f-4dd2-a865-6fd066513b13" containerName="ovnkube-cluster-manager" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.385163 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="5bb94ffb-0736-4317-979f-e6be8f6cf7d9" containerName="oc" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.385172 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="a9bc4f8d-447f-4dd2-a865-6fd066513b13" containerName="kube-rbac-proxy" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.392655 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-tqh6q" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.441229 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wn84\" (UniqueName: \"kubernetes.io/projected/a9bc4f8d-447f-4dd2-a865-6fd066513b13-kube-api-access-4wn84\") pod \"a9bc4f8d-447f-4dd2-a865-6fd066513b13\" (UID: \"a9bc4f8d-447f-4dd2-a865-6fd066513b13\") " Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.441297 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a9bc4f8d-447f-4dd2-a865-6fd066513b13-env-overrides\") pod \"a9bc4f8d-447f-4dd2-a865-6fd066513b13\" (UID: \"a9bc4f8d-447f-4dd2-a865-6fd066513b13\") " Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.441344 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a9bc4f8d-447f-4dd2-a865-6fd066513b13-ovn-control-plane-metrics-cert\") pod \"a9bc4f8d-447f-4dd2-a865-6fd066513b13\" (UID: \"a9bc4f8d-447f-4dd2-a865-6fd066513b13\") " Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.441393 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a9bc4f8d-447f-4dd2-a865-6fd066513b13-ovnkube-config\") pod \"a9bc4f8d-447f-4dd2-a865-6fd066513b13\" (UID: \"a9bc4f8d-447f-4dd2-a865-6fd066513b13\") " Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.441639 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c57e2d39-d756-4d66-85f5-d5b578e412e8-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-tqh6q\" (UID: \"c57e2d39-d756-4d66-85f5-d5b578e412e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-tqh6q" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.441669 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c57e2d39-d756-4d66-85f5-d5b578e412e8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-tqh6q\" (UID: \"c57e2d39-d756-4d66-85f5-d5b578e412e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-tqh6q" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.441685 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtslg\" (UniqueName: \"kubernetes.io/projected/c57e2d39-d756-4d66-85f5-d5b578e412e8-kube-api-access-vtslg\") pod \"ovnkube-control-plane-97c9b6c48-tqh6q\" (UID: \"c57e2d39-d756-4d66-85f5-d5b578e412e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-tqh6q" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.441710 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c57e2d39-d756-4d66-85f5-d5b578e412e8-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-tqh6q\" (UID: \"c57e2d39-d756-4d66-85f5-d5b578e412e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-tqh6q" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.444419 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/a9bc4f8d-447f-4dd2-a865-6fd066513b13-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "a9bc4f8d-447f-4dd2-a865-6fd066513b13" (UID: "a9bc4f8d-447f-4dd2-a865-6fd066513b13"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.444759 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9bc4f8d-447f-4dd2-a865-6fd066513b13-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "a9bc4f8d-447f-4dd2-a865-6fd066513b13" (UID: "a9bc4f8d-447f-4dd2-a865-6fd066513b13"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.448735 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9bc4f8d-447f-4dd2-a865-6fd066513b13-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "a9bc4f8d-447f-4dd2-a865-6fd066513b13" (UID: "a9bc4f8d-447f-4dd2-a865-6fd066513b13"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.448940 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9bc4f8d-447f-4dd2-a865-6fd066513b13-kube-api-access-4wn84" (OuterVolumeSpecName: "kube-api-access-4wn84") pod "a9bc4f8d-447f-4dd2-a865-6fd066513b13" (UID: "a9bc4f8d-447f-4dd2-a865-6fd066513b13"). InnerVolumeSpecName "kube-api-access-4wn84". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.543367 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c57e2d39-d756-4d66-85f5-d5b578e412e8-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-tqh6q\" (UID: \"c57e2d39-d756-4d66-85f5-d5b578e412e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-tqh6q" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.543432 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c57e2d39-d756-4d66-85f5-d5b578e412e8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-tqh6q\" (UID: \"c57e2d39-d756-4d66-85f5-d5b578e412e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-tqh6q" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.543458 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vtslg\" (UniqueName: \"kubernetes.io/projected/c57e2d39-d756-4d66-85f5-d5b578e412e8-kube-api-access-vtslg\") pod \"ovnkube-control-plane-97c9b6c48-tqh6q\" (UID: \"c57e2d39-d756-4d66-85f5-d5b578e412e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-tqh6q" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.543493 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c57e2d39-d756-4d66-85f5-d5b578e412e8-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-tqh6q\" (UID: \"c57e2d39-d756-4d66-85f5-d5b578e412e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-tqh6q" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.544512 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c57e2d39-d756-4d66-85f5-d5b578e412e8-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-tqh6q\" (UID: \"c57e2d39-d756-4d66-85f5-d5b578e412e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-tqh6q" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.544510 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c57e2d39-d756-4d66-85f5-d5b578e412e8-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-tqh6q\" (UID: \"c57e2d39-d756-4d66-85f5-d5b578e412e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-tqh6q" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.544637 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4wn84\" (UniqueName: \"kubernetes.io/projected/a9bc4f8d-447f-4dd2-a865-6fd066513b13-kube-api-access-4wn84\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.544656 5119 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a9bc4f8d-447f-4dd2-a865-6fd066513b13-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.544668 5119 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a9bc4f8d-447f-4dd2-a865-6fd066513b13-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.544681 5119 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a9bc4f8d-447f-4dd2-a865-6fd066513b13-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.549467 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c57e2d39-d756-4d66-85f5-d5b578e412e8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-tqh6q\" (UID: \"c57e2d39-d756-4d66-85f5-d5b578e412e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-tqh6q" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.561793 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtslg\" (UniqueName: \"kubernetes.io/projected/c57e2d39-d756-4d66-85f5-d5b578e412e8-kube-api-access-vtslg\") pod \"ovnkube-control-plane-97c9b6c48-tqh6q\" (UID: \"c57e2d39-d756-4d66-85f5-d5b578e412e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-tqh6q" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.619380 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m42rs_4cd7895c-04ad-4f77-80dc-54fd6ed54e89/ovn-acl-logging/0.log" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.620222 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m42rs_4cd7895c-04ad-4f77-80dc-54fd6ed54e89/ovn-controller/0.log" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.620622 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.645108 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-run-openvswitch\") pod \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.645313 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-var-lib-cni-networks-ovn-kubernetes\") pod \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.645492 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-cni-netd\") pod \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.645224 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "4cd7895c-04ad-4f77-80dc-54fd6ed54e89" (UID: "4cd7895c-04ad-4f77-80dc-54fd6ed54e89"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.645397 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "4cd7895c-04ad-4f77-80dc-54fd6ed54e89" (UID: "4cd7895c-04ad-4f77-80dc-54fd6ed54e89"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.645603 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "4cd7895c-04ad-4f77-80dc-54fd6ed54e89" (UID: "4cd7895c-04ad-4f77-80dc-54fd6ed54e89"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.645833 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-slash" (OuterVolumeSpecName: "host-slash") pod "4cd7895c-04ad-4f77-80dc-54fd6ed54e89" (UID: "4cd7895c-04ad-4f77-80dc-54fd6ed54e89"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.645696 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-slash\") pod \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.645921 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-ovnkube-script-lib\") pod \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.646084 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "4cd7895c-04ad-4f77-80dc-54fd6ed54e89" (UID: "4cd7895c-04ad-4f77-80dc-54fd6ed54e89"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.646681 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-run-netns\") pod \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.646846 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-env-overrides\") pod \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.646910 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-cni-bin\") pod \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.646978 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-etc-openvswitch\") pod \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.647037 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-966tq\" (UniqueName: \"kubernetes.io/projected/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-kube-api-access-966tq\") pod \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.647087 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-kubelet\") pod \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.647140 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-cni-bin" 
(OuterVolumeSpecName: "host-cni-bin") pod "4cd7895c-04ad-4f77-80dc-54fd6ed54e89" (UID: "4cd7895c-04ad-4f77-80dc-54fd6ed54e89"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.647181 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-systemd-units\") pod \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.647210 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "4cd7895c-04ad-4f77-80dc-54fd6ed54e89" (UID: "4cd7895c-04ad-4f77-80dc-54fd6ed54e89"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.647223 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "4cd7895c-04ad-4f77-80dc-54fd6ed54e89" (UID: "4cd7895c-04ad-4f77-80dc-54fd6ed54e89"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.647249 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-run-ovn-kubernetes\") pod \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.647341 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-var-lib-openvswitch\") pod \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.647338 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "4cd7895c-04ad-4f77-80dc-54fd6ed54e89" (UID: "4cd7895c-04ad-4f77-80dc-54fd6ed54e89"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.647345 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "4cd7895c-04ad-4f77-80dc-54fd6ed54e89" (UID: "4cd7895c-04ad-4f77-80dc-54fd6ed54e89"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.647392 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-node-log\") pod \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.647429 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "4cd7895c-04ad-4f77-80dc-54fd6ed54e89" (UID: "4cd7895c-04ad-4f77-80dc-54fd6ed54e89"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.647449 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-run-ovn\") pod \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.647451 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "4cd7895c-04ad-4f77-80dc-54fd6ed54e89" (UID: "4cd7895c-04ad-4f77-80dc-54fd6ed54e89"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.647477 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-node-log" (OuterVolumeSpecName: "node-log") pod "4cd7895c-04ad-4f77-80dc-54fd6ed54e89" (UID: "4cd7895c-04ad-4f77-80dc-54fd6ed54e89"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.647532 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-ovnkube-config\") pod \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.647596 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "4cd7895c-04ad-4f77-80dc-54fd6ed54e89" (UID: "4cd7895c-04ad-4f77-80dc-54fd6ed54e89"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.647683 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-log-socket\") pod \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.647725 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-log-socket" (OuterVolumeSpecName: "log-socket") pod "4cd7895c-04ad-4f77-80dc-54fd6ed54e89" (UID: "4cd7895c-04ad-4f77-80dc-54fd6ed54e89"). 
InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.647776 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "4cd7895c-04ad-4f77-80dc-54fd6ed54e89" (UID: "4cd7895c-04ad-4f77-80dc-54fd6ed54e89"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.647798 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-run-systemd\") pod \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.647873 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-ovn-node-metrics-cert\") pod \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\" (UID: \"4cd7895c-04ad-4f77-80dc-54fd6ed54e89\") " Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.648348 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "4cd7895c-04ad-4f77-80dc-54fd6ed54e89" (UID: "4cd7895c-04ad-4f77-80dc-54fd6ed54e89"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.648611 5119 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.648638 5119 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-log-socket\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.648657 5119 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.648677 5119 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.648699 5119 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.648718 5119 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-slash\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.648736 5119 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.648756 5119 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.648775 5119 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.648793 5119 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.648813 5119 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.648835 5119 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.648851 5119 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.648867 5119 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.648883 5119 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.648900 5119 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-node-log\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.648915 5119 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.652753 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-kube-api-access-966tq" (OuterVolumeSpecName: "kube-api-access-966tq") pod "4cd7895c-04ad-4f77-80dc-54fd6ed54e89" (UID: "4cd7895c-04ad-4f77-80dc-54fd6ed54e89"). InnerVolumeSpecName "kube-api-access-966tq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.654425 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "4cd7895c-04ad-4f77-80dc-54fd6ed54e89" (UID: "4cd7895c-04ad-4f77-80dc-54fd6ed54e89"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.686516 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "4cd7895c-04ad-4f77-80dc-54fd6ed54e89" (UID: "4cd7895c-04ad-4f77-80dc-54fd6ed54e89"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.752254 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-966tq\" (UniqueName: \"kubernetes.io/projected/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-kube-api-access-966tq\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.752292 5119 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.752302 5119 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4cd7895c-04ad-4f77-80dc-54fd6ed54e89-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.752881 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-f78hk"] Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.753658 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="kube-rbac-proxy-node" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.753679 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="kube-rbac-proxy-node" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.753693 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="ovn-controller" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.753701 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="ovn-controller" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.753708 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="northd" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.753714 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="northd" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.753722 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="kube-rbac-proxy-ovn-metrics" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.753728 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" 
containerName="kube-rbac-proxy-ovn-metrics" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.753735 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="ovnkube-controller" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.753741 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="ovnkube-controller" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.753752 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="nbdb" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.753757 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="nbdb" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.753763 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="sbdb" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.753768 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="sbdb" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.753790 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="ovn-acl-logging" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.753796 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="ovn-acl-logging" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.753807 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="kubecfg-setup" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.753812 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="kubecfg-setup" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.753908 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="ovn-acl-logging" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.753919 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="nbdb" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.753925 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="kube-rbac-proxy-ovn-metrics" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.753934 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="ovnkube-controller" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.753940 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="ovn-controller" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.753947 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="northd" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.753956 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="kube-rbac-proxy-node" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.753964 5119 
memory_manager.go:356] "RemoveStaleState removing state" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerName="sbdb" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.760614 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.766530 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-tqh6q" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.853768 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-host-kubelet\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.853806 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-etc-openvswitch\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.853830 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-run-ovn\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.853846 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-host-run-ovn-kubernetes\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.853865 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-host-cni-bin\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.853880 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-log-socket\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.853896 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1f57a2f4-eb53-4719-9d6f-37672879f34c-ovn-node-metrics-cert\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.853910 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/1f57a2f4-eb53-4719-9d6f-37672879f34c-ovnkube-script-lib\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.853930 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.853945 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1f57a2f4-eb53-4719-9d6f-37672879f34c-ovnkube-config\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.853958 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-host-cni-netd\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.853974 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-systemd-units\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.853993 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-host-run-netns\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.854010 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-run-systemd\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.854024 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-run-openvswitch\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.854049 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1f57a2f4-eb53-4719-9d6f-37672879f34c-env-overrides\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.854076 5119 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wchpv\" (UniqueName: \"kubernetes.io/projected/1f57a2f4-eb53-4719-9d6f-37672879f34c-kube-api-access-wchpv\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.854125 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-host-slash\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.854150 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-var-lib-openvswitch\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.854164 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-node-log\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.955384 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-host-kubelet\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.955427 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-etc-openvswitch\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.955449 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-run-ovn\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.955505 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-etc-openvswitch\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.955534 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-host-kubelet\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.955571 5119 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-host-run-ovn-kubernetes\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.955643 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-host-cni-bin\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.955679 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-log-socket\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.955695 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-host-cni-bin\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.955714 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-run-ovn\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.955701 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1f57a2f4-eb53-4719-9d6f-37672879f34c-ovn-node-metrics-cert\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.955763 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-log-socket\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.955798 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1f57a2f4-eb53-4719-9d6f-37672879f34c-ovnkube-script-lib\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.955645 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-host-run-ovn-kubernetes\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.955845 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.955882 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1f57a2f4-eb53-4719-9d6f-37672879f34c-ovnkube-config\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.955924 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.955920 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-host-cni-netd\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.955962 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-systemd-units\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.955986 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-host-run-netns\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.956016 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-run-systemd\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.956033 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-run-openvswitch\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.956058 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1f57a2f4-eb53-4719-9d6f-37672879f34c-env-overrides\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.956096 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wchpv\" (UniqueName: 
\"kubernetes.io/projected/1f57a2f4-eb53-4719-9d6f-37672879f34c-kube-api-access-wchpv\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.956090 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-host-run-netns\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.956129 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-host-slash\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.956152 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-var-lib-openvswitch\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.956158 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-run-openvswitch\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.956167 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-node-log\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.956185 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-node-log\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.956292 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-run-systemd\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.956339 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-host-slash\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.956687 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-systemd-units\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.956692 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-var-lib-openvswitch\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.956787 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1f57a2f4-eb53-4719-9d6f-37672879f34c-host-cni-netd\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.956915 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1f57a2f4-eb53-4719-9d6f-37672879f34c-env-overrides\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.957095 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1f57a2f4-eb53-4719-9d6f-37672879f34c-ovnkube-script-lib\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.957162 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1f57a2f4-eb53-4719-9d6f-37672879f34c-ovnkube-config\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.958628 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1f57a2f4-eb53-4719-9d6f-37672879f34c-ovn-node-metrics-cert\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:56 crc kubenswrapper[5119]: I0220 00:20:56.972207 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wchpv\" (UniqueName: \"kubernetes.io/projected/1f57a2f4-eb53-4719-9d6f-37672879f34c-kube-api-access-wchpv\") pod \"ovnkube-node-f78hk\" (UID: \"1f57a2f4-eb53-4719-9d6f-37672879f34c\") " pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.079419 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:20:57 crc kubenswrapper[5119]: W0220 00:20:57.099049 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f57a2f4_eb53_4719_9d6f_37672879f34c.slice/crio-ca94cbf1fe90adc4837580696d322b5242d4b77e2fdc0075a4016ab6536ed3cc WatchSource:0}: Error finding container ca94cbf1fe90adc4837580696d322b5242d4b77e2fdc0075a4016ab6536ed3cc: Status 404 returned error can't find the container with id ca94cbf1fe90adc4837580696d322b5242d4b77e2fdc0075a4016ab6536ed3cc Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.194611 5119 generic.go:358] "Generic (PLEG): container finished" podID="a9bc4f8d-447f-4dd2-a865-6fd066513b13" containerID="2c3457a4f0c4fb5545204589c52f65cbcfa555040c04e5d8057aa98c7adf983c" exitCode=0 Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.194991 5119 generic.go:358] "Generic (PLEG): container finished" podID="a9bc4f8d-447f-4dd2-a865-6fd066513b13" containerID="22d144085753de360f7f333957476990e604e3f71d125d60b727e26f35243f5b" exitCode=0 Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.195221 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9qqjj" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.195277 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9qqjj" event={"ID":"a9bc4f8d-447f-4dd2-a865-6fd066513b13","Type":"ContainerDied","Data":"2c3457a4f0c4fb5545204589c52f65cbcfa555040c04e5d8057aa98c7adf983c"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.195347 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9qqjj" event={"ID":"a9bc4f8d-447f-4dd2-a865-6fd066513b13","Type":"ContainerDied","Data":"22d144085753de360f7f333957476990e604e3f71d125d60b727e26f35243f5b"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.195363 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9qqjj" event={"ID":"a9bc4f8d-447f-4dd2-a865-6fd066513b13","Type":"ContainerDied","Data":"50d5426bd1c1e4bb6a8ce1a0186057730660b09857e6fa7a9c11e2c8ce105d81"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.195384 5119 scope.go:117] "RemoveContainer" containerID="2c3457a4f0c4fb5545204589c52f65cbcfa555040c04e5d8057aa98c7adf983c" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.200115 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m42rs_4cd7895c-04ad-4f77-80dc-54fd6ed54e89/ovn-acl-logging/0.log" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.200452 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m42rs_4cd7895c-04ad-4f77-80dc-54fd6ed54e89/ovn-controller/0.log" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201272 5119 generic.go:358] "Generic (PLEG): container finished" podID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerID="674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb" exitCode=0 Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201290 5119 generic.go:358] "Generic (PLEG): container finished" podID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerID="b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e" exitCode=0 Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 
00:20:57.201297 5119 generic.go:358] "Generic (PLEG): container finished" podID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerID="f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47" exitCode=0 Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201303 5119 generic.go:358] "Generic (PLEG): container finished" podID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerID="ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd" exitCode=0 Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201309 5119 generic.go:358] "Generic (PLEG): container finished" podID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerID="8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a" exitCode=0 Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201315 5119 generic.go:358] "Generic (PLEG): container finished" podID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerID="010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b" exitCode=0 Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201320 5119 generic.go:358] "Generic (PLEG): container finished" podID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerID="3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b" exitCode=143 Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201326 5119 generic.go:358] "Generic (PLEG): container finished" podID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" containerID="cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218" exitCode=143 Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201407 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" event={"ID":"4cd7895c-04ad-4f77-80dc-54fd6ed54e89","Type":"ContainerDied","Data":"674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201430 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" event={"ID":"4cd7895c-04ad-4f77-80dc-54fd6ed54e89","Type":"ContainerDied","Data":"b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201446 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" event={"ID":"4cd7895c-04ad-4f77-80dc-54fd6ed54e89","Type":"ContainerDied","Data":"f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201455 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" event={"ID":"4cd7895c-04ad-4f77-80dc-54fd6ed54e89","Type":"ContainerDied","Data":"ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201464 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" event={"ID":"4cd7895c-04ad-4f77-80dc-54fd6ed54e89","Type":"ContainerDied","Data":"8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201475 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" event={"ID":"4cd7895c-04ad-4f77-80dc-54fd6ed54e89","Type":"ContainerDied","Data":"010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201487 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201495 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201501 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201506 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201510 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201515 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201519 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201525 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201530 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d9a0ed662d190625d64004072e406885ff260b901839dcf84a987faabb97d362"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201470 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201553 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" event={"ID":"4cd7895c-04ad-4f77-80dc-54fd6ed54e89","Type":"ContainerDied","Data":"3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201763 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201786 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201795 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201802 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201808 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201815 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201830 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201836 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201842 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d9a0ed662d190625d64004072e406885ff260b901839dcf84a987faabb97d362"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201860 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" event={"ID":"4cd7895c-04ad-4f77-80dc-54fd6ed54e89","Type":"ContainerDied","Data":"cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201874 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201881 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201887 5119 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201893 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201898 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201903 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201908 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201913 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201918 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d9a0ed662d190625d64004072e406885ff260b901839dcf84a987faabb97d362"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201926 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m42rs" event={"ID":"4cd7895c-04ad-4f77-80dc-54fd6ed54e89","Type":"ContainerDied","Data":"f5b8dd6de490c18ed3c78de138e433e9027403c6809d8f85b0e22bc24620f10a"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201935 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201941 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201946 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201951 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201959 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201964 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201969 5119 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201974 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.201979 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d9a0ed662d190625d64004072e406885ff260b901839dcf84a987faabb97d362"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.206784 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rlzxr_e24ea4b0-1a34-4fb3-b40c-684c03795e07/kube-multus/0.log" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.206825 5119 generic.go:358] "Generic (PLEG): container finished" podID="e24ea4b0-1a34-4fb3-b40c-684c03795e07" containerID="bff4e744bb7114819286cb3231d0747843a1d7f8308e2439bc6b2ed6e66a9ca9" exitCode=2 Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.207756 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rlzxr" event={"ID":"e24ea4b0-1a34-4fb3-b40c-684c03795e07","Type":"ContainerDied","Data":"bff4e744bb7114819286cb3231d0747843a1d7f8308e2439bc6b2ed6e66a9ca9"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.208405 5119 scope.go:117] "RemoveContainer" containerID="bff4e744bb7114819286cb3231d0747843a1d7f8308e2439bc6b2ed6e66a9ca9" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.210475 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" event={"ID":"1f57a2f4-eb53-4719-9d6f-37672879f34c","Type":"ContainerStarted","Data":"ca94cbf1fe90adc4837580696d322b5242d4b77e2fdc0075a4016ab6536ed3cc"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.216294 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9qqjj"] Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.221131 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9qqjj"] Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.226782 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-tqh6q" event={"ID":"c57e2d39-d756-4d66-85f5-d5b578e412e8","Type":"ContainerStarted","Data":"9a0030c987cb89a459cdc16d904a70865cafb118b54d182606f03335984c20d9"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.226836 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-tqh6q" event={"ID":"c57e2d39-d756-4d66-85f5-d5b578e412e8","Type":"ContainerStarted","Data":"236c8d1eed2a6faa87f019408639e5a47aedcc97492ff9e6ee803be149414aaf"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.226849 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-tqh6q" event={"ID":"c57e2d39-d756-4d66-85f5-d5b578e412e8","Type":"ContainerStarted","Data":"24108eb204c82f3751d7a8b1ccbc3b2dd300aadafdf136c6c1375cc2fe287b6c"} Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.288296 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-tqh6q" podStartSLOduration=1.288279392 
podStartE2EDuration="1.288279392s" podCreationTimestamp="2026-02-20 00:20:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:20:57.283870414 +0000 UTC m=+639.262834726" watchObservedRunningTime="2026-02-20 00:20:57.288279392 +0000 UTC m=+639.267243694" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.316190 5119 scope.go:117] "RemoveContainer" containerID="22d144085753de360f7f333957476990e604e3f71d125d60b727e26f35243f5b" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.327110 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-m42rs"] Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.330996 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-m42rs"] Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.333989 5119 scope.go:117] "RemoveContainer" containerID="2c3457a4f0c4fb5545204589c52f65cbcfa555040c04e5d8057aa98c7adf983c" Feb 20 00:20:57 crc kubenswrapper[5119]: E0220 00:20:57.334372 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c3457a4f0c4fb5545204589c52f65cbcfa555040c04e5d8057aa98c7adf983c\": container with ID starting with 2c3457a4f0c4fb5545204589c52f65cbcfa555040c04e5d8057aa98c7adf983c not found: ID does not exist" containerID="2c3457a4f0c4fb5545204589c52f65cbcfa555040c04e5d8057aa98c7adf983c" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.334413 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c3457a4f0c4fb5545204589c52f65cbcfa555040c04e5d8057aa98c7adf983c"} err="failed to get container status \"2c3457a4f0c4fb5545204589c52f65cbcfa555040c04e5d8057aa98c7adf983c\": rpc error: code = NotFound desc = could not find container \"2c3457a4f0c4fb5545204589c52f65cbcfa555040c04e5d8057aa98c7adf983c\": container with ID starting with 2c3457a4f0c4fb5545204589c52f65cbcfa555040c04e5d8057aa98c7adf983c not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.334436 5119 scope.go:117] "RemoveContainer" containerID="22d144085753de360f7f333957476990e604e3f71d125d60b727e26f35243f5b" Feb 20 00:20:57 crc kubenswrapper[5119]: E0220 00:20:57.334683 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22d144085753de360f7f333957476990e604e3f71d125d60b727e26f35243f5b\": container with ID starting with 22d144085753de360f7f333957476990e604e3f71d125d60b727e26f35243f5b not found: ID does not exist" containerID="22d144085753de360f7f333957476990e604e3f71d125d60b727e26f35243f5b" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.334725 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22d144085753de360f7f333957476990e604e3f71d125d60b727e26f35243f5b"} err="failed to get container status \"22d144085753de360f7f333957476990e604e3f71d125d60b727e26f35243f5b\": rpc error: code = NotFound desc = could not find container \"22d144085753de360f7f333957476990e604e3f71d125d60b727e26f35243f5b\": container with ID starting with 22d144085753de360f7f333957476990e604e3f71d125d60b727e26f35243f5b not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.334743 5119 scope.go:117] "RemoveContainer" containerID="2c3457a4f0c4fb5545204589c52f65cbcfa555040c04e5d8057aa98c7adf983c" Feb 20 00:20:57 crc 
kubenswrapper[5119]: I0220 00:20:57.335017 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c3457a4f0c4fb5545204589c52f65cbcfa555040c04e5d8057aa98c7adf983c"} err="failed to get container status \"2c3457a4f0c4fb5545204589c52f65cbcfa555040c04e5d8057aa98c7adf983c\": rpc error: code = NotFound desc = could not find container \"2c3457a4f0c4fb5545204589c52f65cbcfa555040c04e5d8057aa98c7adf983c\": container with ID starting with 2c3457a4f0c4fb5545204589c52f65cbcfa555040c04e5d8057aa98c7adf983c not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.335041 5119 scope.go:117] "RemoveContainer" containerID="22d144085753de360f7f333957476990e604e3f71d125d60b727e26f35243f5b" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.335229 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22d144085753de360f7f333957476990e604e3f71d125d60b727e26f35243f5b"} err="failed to get container status \"22d144085753de360f7f333957476990e604e3f71d125d60b727e26f35243f5b\": rpc error: code = NotFound desc = could not find container \"22d144085753de360f7f333957476990e604e3f71d125d60b727e26f35243f5b\": container with ID starting with 22d144085753de360f7f333957476990e604e3f71d125d60b727e26f35243f5b not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.335251 5119 scope.go:117] "RemoveContainer" containerID="674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.359642 5119 scope.go:117] "RemoveContainer" containerID="b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.372239 5119 scope.go:117] "RemoveContainer" containerID="f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.388516 5119 scope.go:117] "RemoveContainer" containerID="ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.410036 5119 scope.go:117] "RemoveContainer" containerID="8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.422751 5119 scope.go:117] "RemoveContainer" containerID="010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.437273 5119 scope.go:117] "RemoveContainer" containerID="3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.456375 5119 scope.go:117] "RemoveContainer" containerID="cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.467694 5119 scope.go:117] "RemoveContainer" containerID="d9a0ed662d190625d64004072e406885ff260b901839dcf84a987faabb97d362" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.478421 5119 scope.go:117] "RemoveContainer" containerID="674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb" Feb 20 00:20:57 crc kubenswrapper[5119]: E0220 00:20:57.478780 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb\": container with ID starting with 674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb not found: ID does not exist" 
containerID="674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.478823 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb"} err="failed to get container status \"674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb\": rpc error: code = NotFound desc = could not find container \"674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb\": container with ID starting with 674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.478847 5119 scope.go:117] "RemoveContainer" containerID="b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e" Feb 20 00:20:57 crc kubenswrapper[5119]: E0220 00:20:57.479089 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e\": container with ID starting with b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e not found: ID does not exist" containerID="b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.479127 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e"} err="failed to get container status \"b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e\": rpc error: code = NotFound desc = could not find container \"b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e\": container with ID starting with b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.479154 5119 scope.go:117] "RemoveContainer" containerID="f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47" Feb 20 00:20:57 crc kubenswrapper[5119]: E0220 00:20:57.479393 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47\": container with ID starting with f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47 not found: ID does not exist" containerID="f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.479421 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47"} err="failed to get container status \"f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47\": rpc error: code = NotFound desc = could not find container \"f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47\": container with ID starting with f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47 not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.479438 5119 scope.go:117] "RemoveContainer" containerID="ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd" Feb 20 00:20:57 crc kubenswrapper[5119]: E0220 00:20:57.479649 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd\": container with ID starting with ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd not found: ID does not exist" containerID="ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.479669 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd"} err="failed to get container status \"ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd\": rpc error: code = NotFound desc = could not find container \"ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd\": container with ID starting with ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.479680 5119 scope.go:117] "RemoveContainer" containerID="8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a" Feb 20 00:20:57 crc kubenswrapper[5119]: E0220 00:20:57.479833 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a\": container with ID starting with 8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a not found: ID does not exist" containerID="8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.479850 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a"} err="failed to get container status \"8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a\": rpc error: code = NotFound desc = could not find container \"8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a\": container with ID starting with 8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.479863 5119 scope.go:117] "RemoveContainer" containerID="010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b" Feb 20 00:20:57 crc kubenswrapper[5119]: E0220 00:20:57.480019 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b\": container with ID starting with 010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b not found: ID does not exist" containerID="010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.480037 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b"} err="failed to get container status \"010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b\": rpc error: code = NotFound desc = could not find container \"010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b\": container with ID starting with 010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.480049 5119 scope.go:117] "RemoveContainer" containerID="3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b" Feb 20 00:20:57 crc 
kubenswrapper[5119]: E0220 00:20:57.480205 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b\": container with ID starting with 3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b not found: ID does not exist" containerID="3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.480220 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b"} err="failed to get container status \"3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b\": rpc error: code = NotFound desc = could not find container \"3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b\": container with ID starting with 3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.480231 5119 scope.go:117] "RemoveContainer" containerID="cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218" Feb 20 00:20:57 crc kubenswrapper[5119]: E0220 00:20:57.480385 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218\": container with ID starting with cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218 not found: ID does not exist" containerID="cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.480408 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218"} err="failed to get container status \"cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218\": rpc error: code = NotFound desc = could not find container \"cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218\": container with ID starting with cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218 not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.480428 5119 scope.go:117] "RemoveContainer" containerID="d9a0ed662d190625d64004072e406885ff260b901839dcf84a987faabb97d362" Feb 20 00:20:57 crc kubenswrapper[5119]: E0220 00:20:57.480656 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9a0ed662d190625d64004072e406885ff260b901839dcf84a987faabb97d362\": container with ID starting with d9a0ed662d190625d64004072e406885ff260b901839dcf84a987faabb97d362 not found: ID does not exist" containerID="d9a0ed662d190625d64004072e406885ff260b901839dcf84a987faabb97d362" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.480673 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9a0ed662d190625d64004072e406885ff260b901839dcf84a987faabb97d362"} err="failed to get container status \"d9a0ed662d190625d64004072e406885ff260b901839dcf84a987faabb97d362\": rpc error: code = NotFound desc = could not find container \"d9a0ed662d190625d64004072e406885ff260b901839dcf84a987faabb97d362\": container with ID starting with d9a0ed662d190625d64004072e406885ff260b901839dcf84a987faabb97d362 not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: 
I0220 00:20:57.480685 5119 scope.go:117] "RemoveContainer" containerID="674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.480839 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb"} err="failed to get container status \"674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb\": rpc error: code = NotFound desc = could not find container \"674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb\": container with ID starting with 674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.480855 5119 scope.go:117] "RemoveContainer" containerID="b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.481018 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e"} err="failed to get container status \"b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e\": rpc error: code = NotFound desc = could not find container \"b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e\": container with ID starting with b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.481032 5119 scope.go:117] "RemoveContainer" containerID="f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.481204 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47"} err="failed to get container status \"f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47\": rpc error: code = NotFound desc = could not find container \"f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47\": container with ID starting with f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47 not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.481221 5119 scope.go:117] "RemoveContainer" containerID="ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.481372 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd"} err="failed to get container status \"ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd\": rpc error: code = NotFound desc = could not find container \"ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd\": container with ID starting with ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.481386 5119 scope.go:117] "RemoveContainer" containerID="8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.481594 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a"} err="failed to get container status 
\"8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a\": rpc error: code = NotFound desc = could not find container \"8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a\": container with ID starting with 8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.481612 5119 scope.go:117] "RemoveContainer" containerID="010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.481842 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b"} err="failed to get container status \"010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b\": rpc error: code = NotFound desc = could not find container \"010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b\": container with ID starting with 010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.481858 5119 scope.go:117] "RemoveContainer" containerID="3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.482012 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b"} err="failed to get container status \"3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b\": rpc error: code = NotFound desc = could not find container \"3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b\": container with ID starting with 3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.482029 5119 scope.go:117] "RemoveContainer" containerID="cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.482200 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218"} err="failed to get container status \"cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218\": rpc error: code = NotFound desc = could not find container \"cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218\": container with ID starting with cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218 not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.482230 5119 scope.go:117] "RemoveContainer" containerID="d9a0ed662d190625d64004072e406885ff260b901839dcf84a987faabb97d362" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.482368 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9a0ed662d190625d64004072e406885ff260b901839dcf84a987faabb97d362"} err="failed to get container status \"d9a0ed662d190625d64004072e406885ff260b901839dcf84a987faabb97d362\": rpc error: code = NotFound desc = could not find container \"d9a0ed662d190625d64004072e406885ff260b901839dcf84a987faabb97d362\": container with ID starting with d9a0ed662d190625d64004072e406885ff260b901839dcf84a987faabb97d362 not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.482398 5119 scope.go:117] "RemoveContainer" 
containerID="674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.482563 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb"} err="failed to get container status \"674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb\": rpc error: code = NotFound desc = could not find container \"674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb\": container with ID starting with 674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.482585 5119 scope.go:117] "RemoveContainer" containerID="b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.482736 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e"} err="failed to get container status \"b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e\": rpc error: code = NotFound desc = could not find container \"b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e\": container with ID starting with b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.482753 5119 scope.go:117] "RemoveContainer" containerID="f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.482893 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47"} err="failed to get container status \"f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47\": rpc error: code = NotFound desc = could not find container \"f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47\": container with ID starting with f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47 not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.482907 5119 scope.go:117] "RemoveContainer" containerID="ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.483021 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd"} err="failed to get container status \"ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd\": rpc error: code = NotFound desc = could not find container \"ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd\": container with ID starting with ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.483052 5119 scope.go:117] "RemoveContainer" containerID="8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.483170 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a"} err="failed to get container status \"8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a\": rpc error: code = NotFound desc = could not find 
container \"8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a\": container with ID starting with 8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.483201 5119 scope.go:117] "RemoveContainer" containerID="010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.483316 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b"} err="failed to get container status \"010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b\": rpc error: code = NotFound desc = could not find container \"010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b\": container with ID starting with 010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.483330 5119 scope.go:117] "RemoveContainer" containerID="3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.483465 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b"} err="failed to get container status \"3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b\": rpc error: code = NotFound desc = could not find container \"3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b\": container with ID starting with 3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.483480 5119 scope.go:117] "RemoveContainer" containerID="cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.483777 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218"} err="failed to get container status \"cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218\": rpc error: code = NotFound desc = could not find container \"cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218\": container with ID starting with cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218 not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.483794 5119 scope.go:117] "RemoveContainer" containerID="d9a0ed662d190625d64004072e406885ff260b901839dcf84a987faabb97d362" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.483938 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9a0ed662d190625d64004072e406885ff260b901839dcf84a987faabb97d362"} err="failed to get container status \"d9a0ed662d190625d64004072e406885ff260b901839dcf84a987faabb97d362\": rpc error: code = NotFound desc = could not find container \"d9a0ed662d190625d64004072e406885ff260b901839dcf84a987faabb97d362\": container with ID starting with d9a0ed662d190625d64004072e406885ff260b901839dcf84a987faabb97d362 not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.483952 5119 scope.go:117] "RemoveContainer" containerID="674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.484071 5119 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb"} err="failed to get container status \"674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb\": rpc error: code = NotFound desc = could not find container \"674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb\": container with ID starting with 674e3a18aa6640620ba6fdf66591c4a1336a25e453fec49103eb909474410eeb not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.484086 5119 scope.go:117] "RemoveContainer" containerID="b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.484201 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e"} err="failed to get container status \"b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e\": rpc error: code = NotFound desc = could not find container \"b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e\": container with ID starting with b101d9f5d12b4867f4ba317a753a65bcafe33847b2dae59cd35cb28cc7e6de1e not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.484216 5119 scope.go:117] "RemoveContainer" containerID="f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.484353 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47"} err="failed to get container status \"f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47\": rpc error: code = NotFound desc = could not find container \"f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47\": container with ID starting with f1cf6bcabe925fa0b700187d152fb8ea00cc7122584c5f168de69e3056b1ad47 not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.484384 5119 scope.go:117] "RemoveContainer" containerID="ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.484511 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd"} err="failed to get container status \"ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd\": rpc error: code = NotFound desc = could not find container \"ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd\": container with ID starting with ea1177449dbff5bf83a80e0e1c1c904b8d0e8a49636d545ececeab53092cf0bd not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.484525 5119 scope.go:117] "RemoveContainer" containerID="8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.484672 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a"} err="failed to get container status \"8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a\": rpc error: code = NotFound desc = could not find container \"8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a\": container with ID starting with 
8a86587489c1b3ffd89f071740c2c5d02db2ffb946f49d1af0b38f8ae898a27a not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.484688 5119 scope.go:117] "RemoveContainer" containerID="010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.484804 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b"} err="failed to get container status \"010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b\": rpc error: code = NotFound desc = could not find container \"010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b\": container with ID starting with 010fc7c73cfdca35805bce1cd3d1786241474be2d14503be35fbfbb2177ee16b not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.484820 5119 scope.go:117] "RemoveContainer" containerID="3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.484934 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b"} err="failed to get container status \"3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b\": rpc error: code = NotFound desc = could not find container \"3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b\": container with ID starting with 3a0be49b620dcfc842e7ca8aac597ac2d16bc7ad28285c513f2c9a75c5c4293b not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.484949 5119 scope.go:117] "RemoveContainer" containerID="cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.485065 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218"} err="failed to get container status \"cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218\": rpc error: code = NotFound desc = could not find container \"cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218\": container with ID starting with cb91d3cd6a9e545d0d9c5e6390d21b2921456d7cd6754b81f19679866437b218 not found: ID does not exist" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.485078 5119 scope.go:117] "RemoveContainer" containerID="d9a0ed662d190625d64004072e406885ff260b901839dcf84a987faabb97d362" Feb 20 00:20:57 crc kubenswrapper[5119]: I0220 00:20:57.485190 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9a0ed662d190625d64004072e406885ff260b901839dcf84a987faabb97d362"} err="failed to get container status \"d9a0ed662d190625d64004072e406885ff260b901839dcf84a987faabb97d362\": rpc error: code = NotFound desc = could not find container \"d9a0ed662d190625d64004072e406885ff260b901839dcf84a987faabb97d362\": container with ID starting with d9a0ed662d190625d64004072e406885ff260b901839dcf84a987faabb97d362 not found: ID does not exist" Feb 20 00:20:58 crc kubenswrapper[5119]: I0220 00:20:58.244263 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rlzxr_e24ea4b0-1a34-4fb3-b40c-684c03795e07/kube-multus/0.log" Feb 20 00:20:58 crc kubenswrapper[5119]: I0220 00:20:58.244846 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rlzxr" 
event={"ID":"e24ea4b0-1a34-4fb3-b40c-684c03795e07","Type":"ContainerStarted","Data":"50127bccf0c5f6d1c53bad0e6d39c43b5f62c95a6e3fb10dd1d26123906ad9af"} Feb 20 00:20:58 crc kubenswrapper[5119]: I0220 00:20:58.248518 5119 generic.go:358] "Generic (PLEG): container finished" podID="1f57a2f4-eb53-4719-9d6f-37672879f34c" containerID="707c472504423becc56bb00516439a224a9ec1d217ebe19c92777db53672f9db" exitCode=0 Feb 20 00:20:58 crc kubenswrapper[5119]: I0220 00:20:58.248662 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" event={"ID":"1f57a2f4-eb53-4719-9d6f-37672879f34c","Type":"ContainerDied","Data":"707c472504423becc56bb00516439a224a9ec1d217ebe19c92777db53672f9db"} Feb 20 00:20:58 crc kubenswrapper[5119]: I0220 00:20:58.880145 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cd7895c-04ad-4f77-80dc-54fd6ed54e89" path="/var/lib/kubelet/pods/4cd7895c-04ad-4f77-80dc-54fd6ed54e89/volumes" Feb 20 00:20:58 crc kubenswrapper[5119]: I0220 00:20:58.882431 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9bc4f8d-447f-4dd2-a865-6fd066513b13" path="/var/lib/kubelet/pods/a9bc4f8d-447f-4dd2-a865-6fd066513b13/volumes" Feb 20 00:20:59 crc kubenswrapper[5119]: I0220 00:20:59.264824 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" event={"ID":"1f57a2f4-eb53-4719-9d6f-37672879f34c","Type":"ContainerStarted","Data":"bb1f286c3ed1f84485d53b55e0455ef25947aa0aa1437246ab0f7a286419aa82"} Feb 20 00:20:59 crc kubenswrapper[5119]: I0220 00:20:59.264897 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" event={"ID":"1f57a2f4-eb53-4719-9d6f-37672879f34c","Type":"ContainerStarted","Data":"555e39abf3e3cd34473738a06918bcb8513420482d946788d93f6b73c773fac0"} Feb 20 00:20:59 crc kubenswrapper[5119]: I0220 00:20:59.264935 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" event={"ID":"1f57a2f4-eb53-4719-9d6f-37672879f34c","Type":"ContainerStarted","Data":"0b77c315a4dd56aa6adaaf40f915b181a6751489a57c87818567b70d0de477fc"} Feb 20 00:20:59 crc kubenswrapper[5119]: I0220 00:20:59.264970 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" event={"ID":"1f57a2f4-eb53-4719-9d6f-37672879f34c","Type":"ContainerStarted","Data":"024a50eaa9f33cb46dfdc970ba7da1c41b29b467f7718c09374cbbe77fe17865"} Feb 20 00:20:59 crc kubenswrapper[5119]: I0220 00:20:59.264994 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" event={"ID":"1f57a2f4-eb53-4719-9d6f-37672879f34c","Type":"ContainerStarted","Data":"80894155876a3405bba34cd9249ac48ee9562f4bb770e828f1cf283d2238ff02"} Feb 20 00:20:59 crc kubenswrapper[5119]: I0220 00:20:59.265010 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" event={"ID":"1f57a2f4-eb53-4719-9d6f-37672879f34c","Type":"ContainerStarted","Data":"ac56dcab43cbd19a8a48ace35f56697609e17102d88cf4d3d343f6880ff1e98d"} Feb 20 00:21:02 crc kubenswrapper[5119]: I0220 00:21:02.298656 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" event={"ID":"1f57a2f4-eb53-4719-9d6f-37672879f34c","Type":"ContainerStarted","Data":"953808c99187bb84c5bf965bd935924474aec00a9898597ea098f8072f49c5d7"} Feb 20 00:21:04 crc kubenswrapper[5119]: I0220 00:21:04.315038 5119 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" event={"ID":"1f57a2f4-eb53-4719-9d6f-37672879f34c","Type":"ContainerStarted","Data":"51fbeaa5975605edf3604520354915a4cc2782c58383fae3ec99288751fbc06c"} Feb 20 00:21:04 crc kubenswrapper[5119]: I0220 00:21:04.316076 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:21:04 crc kubenswrapper[5119]: I0220 00:21:04.316234 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:21:04 crc kubenswrapper[5119]: I0220 00:21:04.345745 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:21:04 crc kubenswrapper[5119]: I0220 00:21:04.365081 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" podStartSLOduration=8.365059936 podStartE2EDuration="8.365059936s" podCreationTimestamp="2026-02-20 00:20:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:21:04.348239267 +0000 UTC m=+646.327203579" watchObservedRunningTime="2026-02-20 00:21:04.365059936 +0000 UTC m=+646.344024228" Feb 20 00:21:05 crc kubenswrapper[5119]: I0220 00:21:05.323240 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:21:05 crc kubenswrapper[5119]: I0220 00:21:05.359725 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:21:37 crc kubenswrapper[5119]: I0220 00:21:37.371211 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-f78hk" Feb 20 00:22:00 crc kubenswrapper[5119]: I0220 00:22:00.153535 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29525782-7hlsv"] Feb 20 00:22:00 crc kubenswrapper[5119]: I0220 00:22:00.182637 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525782-7hlsv"] Feb 20 00:22:00 crc kubenswrapper[5119]: I0220 00:22:00.182929 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525782-7hlsv" Feb 20 00:22:00 crc kubenswrapper[5119]: I0220 00:22:00.186879 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-nmc85\"" Feb 20 00:22:00 crc kubenswrapper[5119]: I0220 00:22:00.189088 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 20 00:22:00 crc kubenswrapper[5119]: I0220 00:22:00.189518 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 20 00:22:00 crc kubenswrapper[5119]: I0220 00:22:00.323773 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rflw5\" (UniqueName: \"kubernetes.io/projected/12d846a4-18b9-4b41-b783-4f7282c82065-kube-api-access-rflw5\") pod \"auto-csr-approver-29525782-7hlsv\" (UID: \"12d846a4-18b9-4b41-b783-4f7282c82065\") " pod="openshift-infra/auto-csr-approver-29525782-7hlsv" Feb 20 00:22:00 crc kubenswrapper[5119]: I0220 00:22:00.425763 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rflw5\" (UniqueName: \"kubernetes.io/projected/12d846a4-18b9-4b41-b783-4f7282c82065-kube-api-access-rflw5\") pod \"auto-csr-approver-29525782-7hlsv\" (UID: \"12d846a4-18b9-4b41-b783-4f7282c82065\") " pod="openshift-infra/auto-csr-approver-29525782-7hlsv" Feb 20 00:22:00 crc kubenswrapper[5119]: I0220 00:22:00.461338 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rflw5\" (UniqueName: \"kubernetes.io/projected/12d846a4-18b9-4b41-b783-4f7282c82065-kube-api-access-rflw5\") pod \"auto-csr-approver-29525782-7hlsv\" (UID: \"12d846a4-18b9-4b41-b783-4f7282c82065\") " pod="openshift-infra/auto-csr-approver-29525782-7hlsv" Feb 20 00:22:00 crc kubenswrapper[5119]: I0220 00:22:00.520885 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525782-7hlsv" Feb 20 00:22:00 crc kubenswrapper[5119]: I0220 00:22:00.823767 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525782-7hlsv"] Feb 20 00:22:01 crc kubenswrapper[5119]: I0220 00:22:01.719080 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525782-7hlsv" event={"ID":"12d846a4-18b9-4b41-b783-4f7282c82065","Type":"ContainerStarted","Data":"289878fde4a304150a63fc8db457ef4451c6d9c31b6fb635d3eb5fa99018124c"} Feb 20 00:22:02 crc kubenswrapper[5119]: I0220 00:22:02.729311 5119 generic.go:358] "Generic (PLEG): container finished" podID="12d846a4-18b9-4b41-b783-4f7282c82065" containerID="15ae00ee047d1d2da4daa48c50d0f04db0402b5947e6228eb79aa74c73f808bb" exitCode=0 Feb 20 00:22:02 crc kubenswrapper[5119]: I0220 00:22:02.729431 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525782-7hlsv" event={"ID":"12d846a4-18b9-4b41-b783-4f7282c82065","Type":"ContainerDied","Data":"15ae00ee047d1d2da4daa48c50d0f04db0402b5947e6228eb79aa74c73f808bb"} Feb 20 00:22:04 crc kubenswrapper[5119]: I0220 00:22:04.035213 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525782-7hlsv" Feb 20 00:22:04 crc kubenswrapper[5119]: I0220 00:22:04.175728 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rflw5\" (UniqueName: \"kubernetes.io/projected/12d846a4-18b9-4b41-b783-4f7282c82065-kube-api-access-rflw5\") pod \"12d846a4-18b9-4b41-b783-4f7282c82065\" (UID: \"12d846a4-18b9-4b41-b783-4f7282c82065\") " Feb 20 00:22:04 crc kubenswrapper[5119]: I0220 00:22:04.186225 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12d846a4-18b9-4b41-b783-4f7282c82065-kube-api-access-rflw5" (OuterVolumeSpecName: "kube-api-access-rflw5") pod "12d846a4-18b9-4b41-b783-4f7282c82065" (UID: "12d846a4-18b9-4b41-b783-4f7282c82065"). InnerVolumeSpecName "kube-api-access-rflw5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:22:04 crc kubenswrapper[5119]: I0220 00:22:04.277991 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rflw5\" (UniqueName: \"kubernetes.io/projected/12d846a4-18b9-4b41-b783-4f7282c82065-kube-api-access-rflw5\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:04 crc kubenswrapper[5119]: I0220 00:22:04.747552 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525782-7hlsv" event={"ID":"12d846a4-18b9-4b41-b783-4f7282c82065","Type":"ContainerDied","Data":"289878fde4a304150a63fc8db457ef4451c6d9c31b6fb635d3eb5fa99018124c"} Feb 20 00:22:04 crc kubenswrapper[5119]: I0220 00:22:04.747585 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525782-7hlsv" Feb 20 00:22:04 crc kubenswrapper[5119]: I0220 00:22:04.747615 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="289878fde4a304150a63fc8db457ef4451c6d9c31b6fb635d3eb5fa99018124c" Feb 20 00:22:08 crc kubenswrapper[5119]: I0220 00:22:08.191690 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8cz9b"] Feb 20 00:22:08 crc kubenswrapper[5119]: I0220 00:22:08.192741 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8cz9b" podUID="b2314a6d-2b65-453e-8296-608f8e488ff4" containerName="registry-server" containerID="cri-o://6373cebe6b8769499a6737ddef6649b5f286d004b35763bbf4f25e90e92e1a8c" gracePeriod=30 Feb 20 00:22:08 crc kubenswrapper[5119]: I0220 00:22:08.558130 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8cz9b" Feb 20 00:22:08 crc kubenswrapper[5119]: I0220 00:22:08.749454 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdlkb\" (UniqueName: \"kubernetes.io/projected/b2314a6d-2b65-453e-8296-608f8e488ff4-kube-api-access-rdlkb\") pod \"b2314a6d-2b65-453e-8296-608f8e488ff4\" (UID: \"b2314a6d-2b65-453e-8296-608f8e488ff4\") " Feb 20 00:22:08 crc kubenswrapper[5119]: I0220 00:22:08.749568 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2314a6d-2b65-453e-8296-608f8e488ff4-utilities\") pod \"b2314a6d-2b65-453e-8296-608f8e488ff4\" (UID: \"b2314a6d-2b65-453e-8296-608f8e488ff4\") " Feb 20 00:22:08 crc kubenswrapper[5119]: I0220 00:22:08.749756 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2314a6d-2b65-453e-8296-608f8e488ff4-catalog-content\") pod \"b2314a6d-2b65-453e-8296-608f8e488ff4\" (UID: \"b2314a6d-2b65-453e-8296-608f8e488ff4\") " Feb 20 00:22:08 crc kubenswrapper[5119]: I0220 00:22:08.751719 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2314a6d-2b65-453e-8296-608f8e488ff4-utilities" (OuterVolumeSpecName: "utilities") pod "b2314a6d-2b65-453e-8296-608f8e488ff4" (UID: "b2314a6d-2b65-453e-8296-608f8e488ff4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:22:08 crc kubenswrapper[5119]: I0220 00:22:08.759130 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2314a6d-2b65-453e-8296-608f8e488ff4-kube-api-access-rdlkb" (OuterVolumeSpecName: "kube-api-access-rdlkb") pod "b2314a6d-2b65-453e-8296-608f8e488ff4" (UID: "b2314a6d-2b65-453e-8296-608f8e488ff4"). InnerVolumeSpecName "kube-api-access-rdlkb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:22:08 crc kubenswrapper[5119]: I0220 00:22:08.763998 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2314a6d-2b65-453e-8296-608f8e488ff4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2314a6d-2b65-453e-8296-608f8e488ff4" (UID: "b2314a6d-2b65-453e-8296-608f8e488ff4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:22:08 crc kubenswrapper[5119]: I0220 00:22:08.779064 5119 generic.go:358] "Generic (PLEG): container finished" podID="b2314a6d-2b65-453e-8296-608f8e488ff4" containerID="6373cebe6b8769499a6737ddef6649b5f286d004b35763bbf4f25e90e92e1a8c" exitCode=0 Feb 20 00:22:08 crc kubenswrapper[5119]: I0220 00:22:08.779355 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8cz9b" event={"ID":"b2314a6d-2b65-453e-8296-608f8e488ff4","Type":"ContainerDied","Data":"6373cebe6b8769499a6737ddef6649b5f286d004b35763bbf4f25e90e92e1a8c"} Feb 20 00:22:08 crc kubenswrapper[5119]: I0220 00:22:08.779488 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8cz9b" event={"ID":"b2314a6d-2b65-453e-8296-608f8e488ff4","Type":"ContainerDied","Data":"b2c8e43435da37375915d2bd96e4cc56e2fde695ab1e83c91165076e87a3911b"} Feb 20 00:22:08 crc kubenswrapper[5119]: I0220 00:22:08.779648 5119 scope.go:117] "RemoveContainer" containerID="6373cebe6b8769499a6737ddef6649b5f286d004b35763bbf4f25e90e92e1a8c" Feb 20 00:22:08 crc kubenswrapper[5119]: I0220 00:22:08.779934 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8cz9b" Feb 20 00:22:08 crc kubenswrapper[5119]: I0220 00:22:08.810757 5119 scope.go:117] "RemoveContainer" containerID="b1013ec928968b736564ba0a819b50c515a62ddd46f794bb1bc5d0833444d86d" Feb 20 00:22:08 crc kubenswrapper[5119]: I0220 00:22:08.825618 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8cz9b"] Feb 20 00:22:08 crc kubenswrapper[5119]: I0220 00:22:08.833319 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8cz9b"] Feb 20 00:22:08 crc kubenswrapper[5119]: I0220 00:22:08.835368 5119 scope.go:117] "RemoveContainer" containerID="ed318611ba2205bc2c77138cb5a85105ad1f0e81b9daafad7a2e5f1ece4e7569" Feb 20 00:22:08 crc kubenswrapper[5119]: I0220 00:22:08.852044 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rdlkb\" (UniqueName: \"kubernetes.io/projected/b2314a6d-2b65-453e-8296-608f8e488ff4-kube-api-access-rdlkb\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:08 crc kubenswrapper[5119]: I0220 00:22:08.852082 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2314a6d-2b65-453e-8296-608f8e488ff4-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:08 crc kubenswrapper[5119]: I0220 00:22:08.852096 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2314a6d-2b65-453e-8296-608f8e488ff4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:08 crc kubenswrapper[5119]: I0220 00:22:08.855291 5119 scope.go:117] "RemoveContainer" containerID="6373cebe6b8769499a6737ddef6649b5f286d004b35763bbf4f25e90e92e1a8c" Feb 20 00:22:08 crc kubenswrapper[5119]: E0220 00:22:08.855855 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6373cebe6b8769499a6737ddef6649b5f286d004b35763bbf4f25e90e92e1a8c\": container with ID starting with 6373cebe6b8769499a6737ddef6649b5f286d004b35763bbf4f25e90e92e1a8c not found: ID does not exist" containerID="6373cebe6b8769499a6737ddef6649b5f286d004b35763bbf4f25e90e92e1a8c" Feb 20 00:22:08 crc kubenswrapper[5119]: I0220 00:22:08.855919 5119 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6373cebe6b8769499a6737ddef6649b5f286d004b35763bbf4f25e90e92e1a8c"} err="failed to get container status \"6373cebe6b8769499a6737ddef6649b5f286d004b35763bbf4f25e90e92e1a8c\": rpc error: code = NotFound desc = could not find container \"6373cebe6b8769499a6737ddef6649b5f286d004b35763bbf4f25e90e92e1a8c\": container with ID starting with 6373cebe6b8769499a6737ddef6649b5f286d004b35763bbf4f25e90e92e1a8c not found: ID does not exist" Feb 20 00:22:08 crc kubenswrapper[5119]: I0220 00:22:08.855954 5119 scope.go:117] "RemoveContainer" containerID="b1013ec928968b736564ba0a819b50c515a62ddd46f794bb1bc5d0833444d86d" Feb 20 00:22:08 crc kubenswrapper[5119]: E0220 00:22:08.856525 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1013ec928968b736564ba0a819b50c515a62ddd46f794bb1bc5d0833444d86d\": container with ID starting with b1013ec928968b736564ba0a819b50c515a62ddd46f794bb1bc5d0833444d86d not found: ID does not exist" containerID="b1013ec928968b736564ba0a819b50c515a62ddd46f794bb1bc5d0833444d86d" Feb 20 00:22:08 crc kubenswrapper[5119]: I0220 00:22:08.856606 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1013ec928968b736564ba0a819b50c515a62ddd46f794bb1bc5d0833444d86d"} err="failed to get container status \"b1013ec928968b736564ba0a819b50c515a62ddd46f794bb1bc5d0833444d86d\": rpc error: code = NotFound desc = could not find container \"b1013ec928968b736564ba0a819b50c515a62ddd46f794bb1bc5d0833444d86d\": container with ID starting with b1013ec928968b736564ba0a819b50c515a62ddd46f794bb1bc5d0833444d86d not found: ID does not exist" Feb 20 00:22:08 crc kubenswrapper[5119]: I0220 00:22:08.856640 5119 scope.go:117] "RemoveContainer" containerID="ed318611ba2205bc2c77138cb5a85105ad1f0e81b9daafad7a2e5f1ece4e7569" Feb 20 00:22:08 crc kubenswrapper[5119]: E0220 00:22:08.856922 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed318611ba2205bc2c77138cb5a85105ad1f0e81b9daafad7a2e5f1ece4e7569\": container with ID starting with ed318611ba2205bc2c77138cb5a85105ad1f0e81b9daafad7a2e5f1ece4e7569 not found: ID does not exist" containerID="ed318611ba2205bc2c77138cb5a85105ad1f0e81b9daafad7a2e5f1ece4e7569" Feb 20 00:22:08 crc kubenswrapper[5119]: I0220 00:22:08.856957 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed318611ba2205bc2c77138cb5a85105ad1f0e81b9daafad7a2e5f1ece4e7569"} err="failed to get container status \"ed318611ba2205bc2c77138cb5a85105ad1f0e81b9daafad7a2e5f1ece4e7569\": rpc error: code = NotFound desc = could not find container \"ed318611ba2205bc2c77138cb5a85105ad1f0e81b9daafad7a2e5f1ece4e7569\": container with ID starting with ed318611ba2205bc2c77138cb5a85105ad1f0e81b9daafad7a2e5f1ece4e7569 not found: ID does not exist" Feb 20 00:22:08 crc kubenswrapper[5119]: I0220 00:22:08.868299 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2314a6d-2b65-453e-8296-608f8e488ff4" path="/var/lib/kubelet/pods/b2314a6d-2b65-453e-8296-608f8e488ff4/volumes" Feb 20 00:22:09 crc kubenswrapper[5119]: I0220 00:22:09.613911 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8g2zx"] Feb 20 00:22:09 crc kubenswrapper[5119]: I0220 00:22:09.615165 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing 
container" podUID="b2314a6d-2b65-453e-8296-608f8e488ff4" containerName="registry-server" Feb 20 00:22:09 crc kubenswrapper[5119]: I0220 00:22:09.615200 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2314a6d-2b65-453e-8296-608f8e488ff4" containerName="registry-server" Feb 20 00:22:09 crc kubenswrapper[5119]: I0220 00:22:09.615221 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="12d846a4-18b9-4b41-b783-4f7282c82065" containerName="oc" Feb 20 00:22:09 crc kubenswrapper[5119]: I0220 00:22:09.615237 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="12d846a4-18b9-4b41-b783-4f7282c82065" containerName="oc" Feb 20 00:22:09 crc kubenswrapper[5119]: I0220 00:22:09.615276 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b2314a6d-2b65-453e-8296-608f8e488ff4" containerName="extract-content" Feb 20 00:22:09 crc kubenswrapper[5119]: I0220 00:22:09.615292 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2314a6d-2b65-453e-8296-608f8e488ff4" containerName="extract-content" Feb 20 00:22:09 crc kubenswrapper[5119]: I0220 00:22:09.615338 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b2314a6d-2b65-453e-8296-608f8e488ff4" containerName="extract-utilities" Feb 20 00:22:09 crc kubenswrapper[5119]: I0220 00:22:09.615356 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2314a6d-2b65-453e-8296-608f8e488ff4" containerName="extract-utilities" Feb 20 00:22:09 crc kubenswrapper[5119]: I0220 00:22:09.615646 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="12d846a4-18b9-4b41-b783-4f7282c82065" containerName="oc" Feb 20 00:22:09 crc kubenswrapper[5119]: I0220 00:22:09.615684 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="b2314a6d-2b65-453e-8296-608f8e488ff4" containerName="registry-server" Feb 20 00:22:09 crc kubenswrapper[5119]: I0220 00:22:09.634464 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8g2zx" Feb 20 00:22:09 crc kubenswrapper[5119]: I0220 00:22:09.642986 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8g2zx"] Feb 20 00:22:09 crc kubenswrapper[5119]: I0220 00:22:09.765114 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00dbc3bd-a09d-48cd-98b6-3543bae44d2d-utilities\") pod \"certified-operators-8g2zx\" (UID: \"00dbc3bd-a09d-48cd-98b6-3543bae44d2d\") " pod="openshift-marketplace/certified-operators-8g2zx" Feb 20 00:22:09 crc kubenswrapper[5119]: I0220 00:22:09.765166 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2ttq\" (UniqueName: \"kubernetes.io/projected/00dbc3bd-a09d-48cd-98b6-3543bae44d2d-kube-api-access-l2ttq\") pod \"certified-operators-8g2zx\" (UID: \"00dbc3bd-a09d-48cd-98b6-3543bae44d2d\") " pod="openshift-marketplace/certified-operators-8g2zx" Feb 20 00:22:09 crc kubenswrapper[5119]: I0220 00:22:09.765286 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00dbc3bd-a09d-48cd-98b6-3543bae44d2d-catalog-content\") pod \"certified-operators-8g2zx\" (UID: \"00dbc3bd-a09d-48cd-98b6-3543bae44d2d\") " pod="openshift-marketplace/certified-operators-8g2zx" Feb 20 00:22:09 crc kubenswrapper[5119]: I0220 00:22:09.866263 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00dbc3bd-a09d-48cd-98b6-3543bae44d2d-utilities\") pod \"certified-operators-8g2zx\" (UID: \"00dbc3bd-a09d-48cd-98b6-3543bae44d2d\") " pod="openshift-marketplace/certified-operators-8g2zx" Feb 20 00:22:09 crc kubenswrapper[5119]: I0220 00:22:09.866320 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l2ttq\" (UniqueName: \"kubernetes.io/projected/00dbc3bd-a09d-48cd-98b6-3543bae44d2d-kube-api-access-l2ttq\") pod \"certified-operators-8g2zx\" (UID: \"00dbc3bd-a09d-48cd-98b6-3543bae44d2d\") " pod="openshift-marketplace/certified-operators-8g2zx" Feb 20 00:22:09 crc kubenswrapper[5119]: I0220 00:22:09.866390 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00dbc3bd-a09d-48cd-98b6-3543bae44d2d-catalog-content\") pod \"certified-operators-8g2zx\" (UID: \"00dbc3bd-a09d-48cd-98b6-3543bae44d2d\") " pod="openshift-marketplace/certified-operators-8g2zx" Feb 20 00:22:09 crc kubenswrapper[5119]: I0220 00:22:09.866961 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00dbc3bd-a09d-48cd-98b6-3543bae44d2d-catalog-content\") pod \"certified-operators-8g2zx\" (UID: \"00dbc3bd-a09d-48cd-98b6-3543bae44d2d\") " pod="openshift-marketplace/certified-operators-8g2zx" Feb 20 00:22:09 crc kubenswrapper[5119]: I0220 00:22:09.866956 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00dbc3bd-a09d-48cd-98b6-3543bae44d2d-utilities\") pod \"certified-operators-8g2zx\" (UID: \"00dbc3bd-a09d-48cd-98b6-3543bae44d2d\") " pod="openshift-marketplace/certified-operators-8g2zx" Feb 20 00:22:09 crc kubenswrapper[5119]: I0220 00:22:09.889910 5119 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-l2ttq\" (UniqueName: \"kubernetes.io/projected/00dbc3bd-a09d-48cd-98b6-3543bae44d2d-kube-api-access-l2ttq\") pod \"certified-operators-8g2zx\" (UID: \"00dbc3bd-a09d-48cd-98b6-3543bae44d2d\") " pod="openshift-marketplace/certified-operators-8g2zx" Feb 20 00:22:09 crc kubenswrapper[5119]: I0220 00:22:09.952074 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8g2zx" Feb 20 00:22:10 crc kubenswrapper[5119]: I0220 00:22:10.427399 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8g2zx"] Feb 20 00:22:10 crc kubenswrapper[5119]: I0220 00:22:10.797119 5119 generic.go:358] "Generic (PLEG): container finished" podID="00dbc3bd-a09d-48cd-98b6-3543bae44d2d" containerID="498fdac09ff07df54c57a65e5f54d6d82c612a5a3dbed8eb05fc44f6e7d1659a" exitCode=0 Feb 20 00:22:10 crc kubenswrapper[5119]: I0220 00:22:10.797218 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g2zx" event={"ID":"00dbc3bd-a09d-48cd-98b6-3543bae44d2d","Type":"ContainerDied","Data":"498fdac09ff07df54c57a65e5f54d6d82c612a5a3dbed8eb05fc44f6e7d1659a"} Feb 20 00:22:10 crc kubenswrapper[5119]: I0220 00:22:10.797246 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g2zx" event={"ID":"00dbc3bd-a09d-48cd-98b6-3543bae44d2d","Type":"ContainerStarted","Data":"0310a08739543e2ca9104fb8eca2a9f69294a9b5d2b5ddedddd5f628bf83d53d"} Feb 20 00:22:11 crc kubenswrapper[5119]: I0220 00:22:11.807212 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g2zx" event={"ID":"00dbc3bd-a09d-48cd-98b6-3543bae44d2d","Type":"ContainerStarted","Data":"660d205198ddb68dec48661bceded4e48d0661d42915f8f0e5944a7cc6dcc3fe"} Feb 20 00:22:12 crc kubenswrapper[5119]: I0220 00:22:12.827507 5119 generic.go:358] "Generic (PLEG): container finished" podID="00dbc3bd-a09d-48cd-98b6-3543bae44d2d" containerID="660d205198ddb68dec48661bceded4e48d0661d42915f8f0e5944a7cc6dcc3fe" exitCode=0 Feb 20 00:22:12 crc kubenswrapper[5119]: I0220 00:22:12.827757 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g2zx" event={"ID":"00dbc3bd-a09d-48cd-98b6-3543bae44d2d","Type":"ContainerDied","Data":"660d205198ddb68dec48661bceded4e48d0661d42915f8f0e5944a7cc6dcc3fe"} Feb 20 00:22:13 crc kubenswrapper[5119]: I0220 00:22:13.839755 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g2zx" event={"ID":"00dbc3bd-a09d-48cd-98b6-3543bae44d2d","Type":"ContainerStarted","Data":"3d8d41d4e3918c3fb6df45ca4a9c0f531a2581d9d8e4b6b6d19df7cf64a528c1"} Feb 20 00:22:13 crc kubenswrapper[5119]: I0220 00:22:13.866043 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8g2zx" podStartSLOduration=4.213305184 podStartE2EDuration="4.866015529s" podCreationTimestamp="2026-02-20 00:22:09 +0000 UTC" firstStartedPulling="2026-02-20 00:22:10.799153997 +0000 UTC m=+712.778118339" lastFinishedPulling="2026-02-20 00:22:11.451864352 +0000 UTC m=+713.430828684" observedRunningTime="2026-02-20 00:22:13.863741138 +0000 UTC m=+715.842705510" watchObservedRunningTime="2026-02-20 00:22:13.866015529 +0000 UTC m=+715.844979871" Feb 20 00:22:17 crc kubenswrapper[5119]: I0220 00:22:17.219367 5119 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7"] Feb 20 00:22:17 crc kubenswrapper[5119]: I0220 00:22:17.248437 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7"] Feb 20 00:22:17 crc kubenswrapper[5119]: I0220 00:22:17.248694 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7" Feb 20 00:22:17 crc kubenswrapper[5119]: I0220 00:22:17.251579 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Feb 20 00:22:17 crc kubenswrapper[5119]: I0220 00:22:17.372458 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/27c1b674-a630-4652-8c16-55724136f7d8-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7\" (UID: \"27c1b674-a630-4652-8c16-55724136f7d8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7" Feb 20 00:22:17 crc kubenswrapper[5119]: I0220 00:22:17.373012 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96ps7\" (UniqueName: \"kubernetes.io/projected/27c1b674-a630-4652-8c16-55724136f7d8-kube-api-access-96ps7\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7\" (UID: \"27c1b674-a630-4652-8c16-55724136f7d8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7" Feb 20 00:22:17 crc kubenswrapper[5119]: I0220 00:22:17.373087 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/27c1b674-a630-4652-8c16-55724136f7d8-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7\" (UID: \"27c1b674-a630-4652-8c16-55724136f7d8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7" Feb 20 00:22:17 crc kubenswrapper[5119]: I0220 00:22:17.474700 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-96ps7\" (UniqueName: \"kubernetes.io/projected/27c1b674-a630-4652-8c16-55724136f7d8-kube-api-access-96ps7\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7\" (UID: \"27c1b674-a630-4652-8c16-55724136f7d8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7" Feb 20 00:22:17 crc kubenswrapper[5119]: I0220 00:22:17.474812 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/27c1b674-a630-4652-8c16-55724136f7d8-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7\" (UID: \"27c1b674-a630-4652-8c16-55724136f7d8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7" Feb 20 00:22:17 crc kubenswrapper[5119]: I0220 00:22:17.474949 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/27c1b674-a630-4652-8c16-55724136f7d8-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7\" (UID: \"27c1b674-a630-4652-8c16-55724136f7d8\") " 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7" Feb 20 00:22:17 crc kubenswrapper[5119]: I0220 00:22:17.475435 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/27c1b674-a630-4652-8c16-55724136f7d8-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7\" (UID: \"27c1b674-a630-4652-8c16-55724136f7d8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7" Feb 20 00:22:17 crc kubenswrapper[5119]: I0220 00:22:17.475502 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/27c1b674-a630-4652-8c16-55724136f7d8-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7\" (UID: \"27c1b674-a630-4652-8c16-55724136f7d8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7" Feb 20 00:22:17 crc kubenswrapper[5119]: I0220 00:22:17.500038 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-96ps7\" (UniqueName: \"kubernetes.io/projected/27c1b674-a630-4652-8c16-55724136f7d8-kube-api-access-96ps7\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7\" (UID: \"27c1b674-a630-4652-8c16-55724136f7d8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7" Feb 20 00:22:17 crc kubenswrapper[5119]: I0220 00:22:17.573915 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7" Feb 20 00:22:17 crc kubenswrapper[5119]: I0220 00:22:17.863494 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7"] Feb 20 00:22:17 crc kubenswrapper[5119]: W0220 00:22:17.874692 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27c1b674_a630_4652_8c16_55724136f7d8.slice/crio-7268db69858ee835143efd36a33c6e3742e281f93fdd8162c76fb5811e4f7379 WatchSource:0}: Error finding container 7268db69858ee835143efd36a33c6e3742e281f93fdd8162c76fb5811e4f7379: Status 404 returned error can't find the container with id 7268db69858ee835143efd36a33c6e3742e281f93fdd8162c76fb5811e4f7379 Feb 20 00:22:18 crc kubenswrapper[5119]: I0220 00:22:18.880020 5119 generic.go:358] "Generic (PLEG): container finished" podID="27c1b674-a630-4652-8c16-55724136f7d8" containerID="f0bec5f1c6064c56dd849445731bb710403ef43e126816ee2a5caf72c5f55723" exitCode=0 Feb 20 00:22:18 crc kubenswrapper[5119]: I0220 00:22:18.880623 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7" event={"ID":"27c1b674-a630-4652-8c16-55724136f7d8","Type":"ContainerDied","Data":"f0bec5f1c6064c56dd849445731bb710403ef43e126816ee2a5caf72c5f55723"} Feb 20 00:22:18 crc kubenswrapper[5119]: I0220 00:22:18.880666 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7" event={"ID":"27c1b674-a630-4652-8c16-55724136f7d8","Type":"ContainerStarted","Data":"7268db69858ee835143efd36a33c6e3742e281f93fdd8162c76fb5811e4f7379"} Feb 20 00:22:19 crc kubenswrapper[5119]: I0220 00:22:19.952791 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not 
ready" pod="openshift-marketplace/certified-operators-8g2zx" Feb 20 00:22:19 crc kubenswrapper[5119]: I0220 00:22:19.952898 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8g2zx" Feb 20 00:22:20 crc kubenswrapper[5119]: I0220 00:22:20.029910 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh"] Feb 20 00:22:20 crc kubenswrapper[5119]: I0220 00:22:20.038596 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8g2zx" Feb 20 00:22:20 crc kubenswrapper[5119]: I0220 00:22:20.038685 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh" Feb 20 00:22:20 crc kubenswrapper[5119]: I0220 00:22:20.048152 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh"] Feb 20 00:22:20 crc kubenswrapper[5119]: I0220 00:22:20.113824 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d2ed2092-f16b-465e-b93a-f7c4dd8368e0-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh\" (UID: \"d2ed2092-f16b-465e-b93a-f7c4dd8368e0\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh" Feb 20 00:22:20 crc kubenswrapper[5119]: I0220 00:22:20.113914 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8vpv\" (UniqueName: \"kubernetes.io/projected/d2ed2092-f16b-465e-b93a-f7c4dd8368e0-kube-api-access-s8vpv\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh\" (UID: \"d2ed2092-f16b-465e-b93a-f7c4dd8368e0\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh" Feb 20 00:22:20 crc kubenswrapper[5119]: I0220 00:22:20.113978 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d2ed2092-f16b-465e-b93a-f7c4dd8368e0-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh\" (UID: \"d2ed2092-f16b-465e-b93a-f7c4dd8368e0\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh" Feb 20 00:22:20 crc kubenswrapper[5119]: I0220 00:22:20.215424 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d2ed2092-f16b-465e-b93a-f7c4dd8368e0-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh\" (UID: \"d2ed2092-f16b-465e-b93a-f7c4dd8368e0\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh" Feb 20 00:22:20 crc kubenswrapper[5119]: I0220 00:22:20.215501 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d2ed2092-f16b-465e-b93a-f7c4dd8368e0-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh\" (UID: \"d2ed2092-f16b-465e-b93a-f7c4dd8368e0\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh" Feb 20 00:22:20 crc kubenswrapper[5119]: I0220 00:22:20.215634 5119 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-s8vpv\" (UniqueName: \"kubernetes.io/projected/d2ed2092-f16b-465e-b93a-f7c4dd8368e0-kube-api-access-s8vpv\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh\" (UID: \"d2ed2092-f16b-465e-b93a-f7c4dd8368e0\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh" Feb 20 00:22:20 crc kubenswrapper[5119]: I0220 00:22:20.216187 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d2ed2092-f16b-465e-b93a-f7c4dd8368e0-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh\" (UID: \"d2ed2092-f16b-465e-b93a-f7c4dd8368e0\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh" Feb 20 00:22:20 crc kubenswrapper[5119]: I0220 00:22:20.216370 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d2ed2092-f16b-465e-b93a-f7c4dd8368e0-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh\" (UID: \"d2ed2092-f16b-465e-b93a-f7c4dd8368e0\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh" Feb 20 00:22:20 crc kubenswrapper[5119]: I0220 00:22:20.240861 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8vpv\" (UniqueName: \"kubernetes.io/projected/d2ed2092-f16b-465e-b93a-f7c4dd8368e0-kube-api-access-s8vpv\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh\" (UID: \"d2ed2092-f16b-465e-b93a-f7c4dd8368e0\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh" Feb 20 00:22:20 crc kubenswrapper[5119]: I0220 00:22:20.404354 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh" Feb 20 00:22:20 crc kubenswrapper[5119]: I0220 00:22:20.628925 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh"] Feb 20 00:22:20 crc kubenswrapper[5119]: I0220 00:22:20.898568 5119 generic.go:358] "Generic (PLEG): container finished" podID="27c1b674-a630-4652-8c16-55724136f7d8" containerID="052ecbc172415020ea1e60fa5b1fb72fdc7ca36e1c3e96497901cb9850709d23" exitCode=0 Feb 20 00:22:20 crc kubenswrapper[5119]: I0220 00:22:20.898765 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7" event={"ID":"27c1b674-a630-4652-8c16-55724136f7d8","Type":"ContainerDied","Data":"052ecbc172415020ea1e60fa5b1fb72fdc7ca36e1c3e96497901cb9850709d23"} Feb 20 00:22:20 crc kubenswrapper[5119]: I0220 00:22:20.903877 5119 generic.go:358] "Generic (PLEG): container finished" podID="d2ed2092-f16b-465e-b93a-f7c4dd8368e0" containerID="bec2c1078295cb5db55972d1034230b0a64aadde7caebe90061e90492ba77447" exitCode=0 Feb 20 00:22:20 crc kubenswrapper[5119]: I0220 00:22:20.903932 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh" event={"ID":"d2ed2092-f16b-465e-b93a-f7c4dd8368e0","Type":"ContainerDied","Data":"bec2c1078295cb5db55972d1034230b0a64aadde7caebe90061e90492ba77447"} Feb 20 00:22:20 crc kubenswrapper[5119]: I0220 00:22:20.903972 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh" event={"ID":"d2ed2092-f16b-465e-b93a-f7c4dd8368e0","Type":"ContainerStarted","Data":"36146a6a5520a8b9ef099580acd2faa06d80bf1ce9dd38e15a7114b904c84ce4"} Feb 20 00:22:20 crc kubenswrapper[5119]: I0220 00:22:20.975390 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8g2zx" Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.231818 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6"] Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.239501 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6" Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.247585 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6"] Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.330866 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/77de016a-acf4-43e7-a390-e30a3d712904-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6\" (UID: \"77de016a-acf4-43e7-a390-e30a3d712904\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6" Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.330948 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/77de016a-acf4-43e7-a390-e30a3d712904-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6\" (UID: \"77de016a-acf4-43e7-a390-e30a3d712904\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6" Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.331175 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b949\" (UniqueName: \"kubernetes.io/projected/77de016a-acf4-43e7-a390-e30a3d712904-kube-api-access-4b949\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6\" (UID: \"77de016a-acf4-43e7-a390-e30a3d712904\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6" Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.388435 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pw7fz"] Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.394817 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pw7fz" Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.408210 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pw7fz"] Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.433378 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/77de016a-acf4-43e7-a390-e30a3d712904-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6\" (UID: \"77de016a-acf4-43e7-a390-e30a3d712904\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6" Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.433486 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4b949\" (UniqueName: \"kubernetes.io/projected/77de016a-acf4-43e7-a390-e30a3d712904-kube-api-access-4b949\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6\" (UID: \"77de016a-acf4-43e7-a390-e30a3d712904\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6" Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.433527 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/77de016a-acf4-43e7-a390-e30a3d712904-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6\" (UID: \"77de016a-acf4-43e7-a390-e30a3d712904\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6" Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.434003 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/77de016a-acf4-43e7-a390-e30a3d712904-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6\" (UID: \"77de016a-acf4-43e7-a390-e30a3d712904\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6" Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.434140 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/77de016a-acf4-43e7-a390-e30a3d712904-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6\" (UID: \"77de016a-acf4-43e7-a390-e30a3d712904\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6" Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.464626 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b949\" (UniqueName: \"kubernetes.io/projected/77de016a-acf4-43e7-a390-e30a3d712904-kube-api-access-4b949\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6\" (UID: \"77de016a-acf4-43e7-a390-e30a3d712904\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6" Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.536252 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6239da79-2b3e-4abf-802f-e80bdec9bf1c-catalog-content\") pod \"redhat-operators-pw7fz\" (UID: \"6239da79-2b3e-4abf-802f-e80bdec9bf1c\") " pod="openshift-marketplace/redhat-operators-pw7fz" Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.538203 5119 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfskj\" (UniqueName: \"kubernetes.io/projected/6239da79-2b3e-4abf-802f-e80bdec9bf1c-kube-api-access-zfskj\") pod \"redhat-operators-pw7fz\" (UID: \"6239da79-2b3e-4abf-802f-e80bdec9bf1c\") " pod="openshift-marketplace/redhat-operators-pw7fz" Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.538300 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6239da79-2b3e-4abf-802f-e80bdec9bf1c-utilities\") pod \"redhat-operators-pw7fz\" (UID: \"6239da79-2b3e-4abf-802f-e80bdec9bf1c\") " pod="openshift-marketplace/redhat-operators-pw7fz" Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.587839 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6" Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.639738 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6239da79-2b3e-4abf-802f-e80bdec9bf1c-catalog-content\") pod \"redhat-operators-pw7fz\" (UID: \"6239da79-2b3e-4abf-802f-e80bdec9bf1c\") " pod="openshift-marketplace/redhat-operators-pw7fz" Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.639872 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zfskj\" (UniqueName: \"kubernetes.io/projected/6239da79-2b3e-4abf-802f-e80bdec9bf1c-kube-api-access-zfskj\") pod \"redhat-operators-pw7fz\" (UID: \"6239da79-2b3e-4abf-802f-e80bdec9bf1c\") " pod="openshift-marketplace/redhat-operators-pw7fz" Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.639903 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6239da79-2b3e-4abf-802f-e80bdec9bf1c-utilities\") pod \"redhat-operators-pw7fz\" (UID: \"6239da79-2b3e-4abf-802f-e80bdec9bf1c\") " pod="openshift-marketplace/redhat-operators-pw7fz" Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.640386 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6239da79-2b3e-4abf-802f-e80bdec9bf1c-catalog-content\") pod \"redhat-operators-pw7fz\" (UID: \"6239da79-2b3e-4abf-802f-e80bdec9bf1c\") " pod="openshift-marketplace/redhat-operators-pw7fz" Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.640794 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6239da79-2b3e-4abf-802f-e80bdec9bf1c-utilities\") pod \"redhat-operators-pw7fz\" (UID: \"6239da79-2b3e-4abf-802f-e80bdec9bf1c\") " pod="openshift-marketplace/redhat-operators-pw7fz" Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.662162 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfskj\" (UniqueName: \"kubernetes.io/projected/6239da79-2b3e-4abf-802f-e80bdec9bf1c-kube-api-access-zfskj\") pod \"redhat-operators-pw7fz\" (UID: \"6239da79-2b3e-4abf-802f-e80bdec9bf1c\") " pod="openshift-marketplace/redhat-operators-pw7fz" Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.761698 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pw7fz" Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.802953 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6"] Feb 20 00:22:21 crc kubenswrapper[5119]: W0220 00:22:21.842363 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77de016a_acf4_43e7_a390_e30a3d712904.slice/crio-6ef36f17c0f7bd4e4014b6d57c02f64636a70948ab566ceddfa4f7f329ee079b WatchSource:0}: Error finding container 6ef36f17c0f7bd4e4014b6d57c02f64636a70948ab566ceddfa4f7f329ee079b: Status 404 returned error can't find the container with id 6ef36f17c0f7bd4e4014b6d57c02f64636a70948ab566ceddfa4f7f329ee079b Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.911461 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6" event={"ID":"77de016a-acf4-43e7-a390-e30a3d712904","Type":"ContainerStarted","Data":"6ef36f17c0f7bd4e4014b6d57c02f64636a70948ab566ceddfa4f7f329ee079b"} Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.913643 5119 generic.go:358] "Generic (PLEG): container finished" podID="27c1b674-a630-4652-8c16-55724136f7d8" containerID="8f7344fb8387265a510db1876242a6a12e228b02783e7acdc96adf9ae7f1c659" exitCode=0 Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.913785 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7" event={"ID":"27c1b674-a630-4652-8c16-55724136f7d8","Type":"ContainerDied","Data":"8f7344fb8387265a510db1876242a6a12e228b02783e7acdc96adf9ae7f1c659"} Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.916508 5119 generic.go:358] "Generic (PLEG): container finished" podID="d2ed2092-f16b-465e-b93a-f7c4dd8368e0" containerID="51abdab4902a4a9fff61df5017fdf5f46d0099c7adfcf32a173ff8cd5f88e4f8" exitCode=0 Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.917341 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh" event={"ID":"d2ed2092-f16b-465e-b93a-f7c4dd8368e0","Type":"ContainerDied","Data":"51abdab4902a4a9fff61df5017fdf5f46d0099c7adfcf32a173ff8cd5f88e4f8"} Feb 20 00:22:21 crc kubenswrapper[5119]: I0220 00:22:21.990234 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pw7fz"] Feb 20 00:22:22 crc kubenswrapper[5119]: I0220 00:22:22.924588 5119 generic.go:358] "Generic (PLEG): container finished" podID="d2ed2092-f16b-465e-b93a-f7c4dd8368e0" containerID="94ab9c668c85c1156dd882b65ce4ae1b2a567e48a327b1904867c8e5ccb866fa" exitCode=0 Feb 20 00:22:22 crc kubenswrapper[5119]: I0220 00:22:22.924642 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh" event={"ID":"d2ed2092-f16b-465e-b93a-f7c4dd8368e0","Type":"ContainerDied","Data":"94ab9c668c85c1156dd882b65ce4ae1b2a567e48a327b1904867c8e5ccb866fa"} Feb 20 00:22:22 crc kubenswrapper[5119]: I0220 00:22:22.926220 5119 generic.go:358] "Generic (PLEG): container finished" podID="77de016a-acf4-43e7-a390-e30a3d712904" containerID="b82d1ef7597de6fac6306bfd90cc893b8cae074745d7132aa8172bd31efcf700" exitCode=0 Feb 20 00:22:22 crc kubenswrapper[5119]: I0220 00:22:22.926288 5119 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6" event={"ID":"77de016a-acf4-43e7-a390-e30a3d712904","Type":"ContainerDied","Data":"b82d1ef7597de6fac6306bfd90cc893b8cae074745d7132aa8172bd31efcf700"} Feb 20 00:22:22 crc kubenswrapper[5119]: I0220 00:22:22.927578 5119 generic.go:358] "Generic (PLEG): container finished" podID="6239da79-2b3e-4abf-802f-e80bdec9bf1c" containerID="5196aaf10ec3d5862337e6e725ae10a18fa24c05ed644916c699f21ee5309721" exitCode=0 Feb 20 00:22:22 crc kubenswrapper[5119]: I0220 00:22:22.927672 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pw7fz" event={"ID":"6239da79-2b3e-4abf-802f-e80bdec9bf1c","Type":"ContainerDied","Data":"5196aaf10ec3d5862337e6e725ae10a18fa24c05ed644916c699f21ee5309721"} Feb 20 00:22:22 crc kubenswrapper[5119]: I0220 00:22:22.927711 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pw7fz" event={"ID":"6239da79-2b3e-4abf-802f-e80bdec9bf1c","Type":"ContainerStarted","Data":"ed8dbb4cbacb1c4ba04a238b18de2dc06d467dec1d5fefd9168c9fbbd81af6ac"} Feb 20 00:22:23 crc kubenswrapper[5119]: I0220 00:22:23.144553 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7" Feb 20 00:22:23 crc kubenswrapper[5119]: I0220 00:22:23.258975 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96ps7\" (UniqueName: \"kubernetes.io/projected/27c1b674-a630-4652-8c16-55724136f7d8-kube-api-access-96ps7\") pod \"27c1b674-a630-4652-8c16-55724136f7d8\" (UID: \"27c1b674-a630-4652-8c16-55724136f7d8\") " Feb 20 00:22:23 crc kubenswrapper[5119]: I0220 00:22:23.259086 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/27c1b674-a630-4652-8c16-55724136f7d8-util\") pod \"27c1b674-a630-4652-8c16-55724136f7d8\" (UID: \"27c1b674-a630-4652-8c16-55724136f7d8\") " Feb 20 00:22:23 crc kubenswrapper[5119]: I0220 00:22:23.259181 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/27c1b674-a630-4652-8c16-55724136f7d8-bundle\") pod \"27c1b674-a630-4652-8c16-55724136f7d8\" (UID: \"27c1b674-a630-4652-8c16-55724136f7d8\") " Feb 20 00:22:23 crc kubenswrapper[5119]: I0220 00:22:23.261412 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27c1b674-a630-4652-8c16-55724136f7d8-bundle" (OuterVolumeSpecName: "bundle") pod "27c1b674-a630-4652-8c16-55724136f7d8" (UID: "27c1b674-a630-4652-8c16-55724136f7d8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:22:23 crc kubenswrapper[5119]: I0220 00:22:23.266173 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27c1b674-a630-4652-8c16-55724136f7d8-kube-api-access-96ps7" (OuterVolumeSpecName: "kube-api-access-96ps7") pod "27c1b674-a630-4652-8c16-55724136f7d8" (UID: "27c1b674-a630-4652-8c16-55724136f7d8"). InnerVolumeSpecName "kube-api-access-96ps7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:22:23 crc kubenswrapper[5119]: I0220 00:22:23.269753 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27c1b674-a630-4652-8c16-55724136f7d8-util" (OuterVolumeSpecName: "util") pod "27c1b674-a630-4652-8c16-55724136f7d8" (UID: "27c1b674-a630-4652-8c16-55724136f7d8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:22:23 crc kubenswrapper[5119]: I0220 00:22:23.360868 5119 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/27c1b674-a630-4652-8c16-55724136f7d8-bundle\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:23 crc kubenswrapper[5119]: I0220 00:22:23.360902 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-96ps7\" (UniqueName: \"kubernetes.io/projected/27c1b674-a630-4652-8c16-55724136f7d8-kube-api-access-96ps7\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:23 crc kubenswrapper[5119]: I0220 00:22:23.360913 5119 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/27c1b674-a630-4652-8c16-55724136f7d8-util\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:23 crc kubenswrapper[5119]: I0220 00:22:23.935669 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pw7fz" event={"ID":"6239da79-2b3e-4abf-802f-e80bdec9bf1c","Type":"ContainerStarted","Data":"f676f7faafd8b74d62ba53cd01005e0284818f1975cc08897260c790bd1a7dae"} Feb 20 00:22:23 crc kubenswrapper[5119]: I0220 00:22:23.940342 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7" Feb 20 00:22:23 crc kubenswrapper[5119]: I0220 00:22:23.940432 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7" event={"ID":"27c1b674-a630-4652-8c16-55724136f7d8","Type":"ContainerDied","Data":"7268db69858ee835143efd36a33c6e3742e281f93fdd8162c76fb5811e4f7379"} Feb 20 00:22:23 crc kubenswrapper[5119]: I0220 00:22:23.940454 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7268db69858ee835143efd36a33c6e3742e281f93fdd8162c76fb5811e4f7379" Feb 20 00:22:24 crc kubenswrapper[5119]: I0220 00:22:24.211145 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh" Feb 20 00:22:24 crc kubenswrapper[5119]: I0220 00:22:24.273701 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d2ed2092-f16b-465e-b93a-f7c4dd8368e0-util\") pod \"d2ed2092-f16b-465e-b93a-f7c4dd8368e0\" (UID: \"d2ed2092-f16b-465e-b93a-f7c4dd8368e0\") " Feb 20 00:22:24 crc kubenswrapper[5119]: I0220 00:22:24.273827 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d2ed2092-f16b-465e-b93a-f7c4dd8368e0-bundle\") pod \"d2ed2092-f16b-465e-b93a-f7c4dd8368e0\" (UID: \"d2ed2092-f16b-465e-b93a-f7c4dd8368e0\") " Feb 20 00:22:24 crc kubenswrapper[5119]: I0220 00:22:24.273888 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8vpv\" (UniqueName: \"kubernetes.io/projected/d2ed2092-f16b-465e-b93a-f7c4dd8368e0-kube-api-access-s8vpv\") pod \"d2ed2092-f16b-465e-b93a-f7c4dd8368e0\" (UID: \"d2ed2092-f16b-465e-b93a-f7c4dd8368e0\") " Feb 20 00:22:24 crc kubenswrapper[5119]: I0220 00:22:24.275645 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2ed2092-f16b-465e-b93a-f7c4dd8368e0-bundle" (OuterVolumeSpecName: "bundle") pod "d2ed2092-f16b-465e-b93a-f7c4dd8368e0" (UID: "d2ed2092-f16b-465e-b93a-f7c4dd8368e0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:22:24 crc kubenswrapper[5119]: I0220 00:22:24.285935 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2ed2092-f16b-465e-b93a-f7c4dd8368e0-util" (OuterVolumeSpecName: "util") pod "d2ed2092-f16b-465e-b93a-f7c4dd8368e0" (UID: "d2ed2092-f16b-465e-b93a-f7c4dd8368e0"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:22:24 crc kubenswrapper[5119]: I0220 00:22:24.288413 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2ed2092-f16b-465e-b93a-f7c4dd8368e0-kube-api-access-s8vpv" (OuterVolumeSpecName: "kube-api-access-s8vpv") pod "d2ed2092-f16b-465e-b93a-f7c4dd8368e0" (UID: "d2ed2092-f16b-465e-b93a-f7c4dd8368e0"). InnerVolumeSpecName "kube-api-access-s8vpv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:22:24 crc kubenswrapper[5119]: I0220 00:22:24.376059 5119 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d2ed2092-f16b-465e-b93a-f7c4dd8368e0-util\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:24 crc kubenswrapper[5119]: I0220 00:22:24.376098 5119 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d2ed2092-f16b-465e-b93a-f7c4dd8368e0-bundle\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:24 crc kubenswrapper[5119]: I0220 00:22:24.376107 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s8vpv\" (UniqueName: \"kubernetes.io/projected/d2ed2092-f16b-465e-b93a-f7c4dd8368e0-kube-api-access-s8vpv\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:24 crc kubenswrapper[5119]: I0220 00:22:24.952748 5119 generic.go:358] "Generic (PLEG): container finished" podID="6239da79-2b3e-4abf-802f-e80bdec9bf1c" containerID="f676f7faafd8b74d62ba53cd01005e0284818f1975cc08897260c790bd1a7dae" exitCode=0 Feb 20 00:22:24 crc kubenswrapper[5119]: I0220 00:22:24.952897 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pw7fz" event={"ID":"6239da79-2b3e-4abf-802f-e80bdec9bf1c","Type":"ContainerDied","Data":"f676f7faafd8b74d62ba53cd01005e0284818f1975cc08897260c790bd1a7dae"} Feb 20 00:22:24 crc kubenswrapper[5119]: I0220 00:22:24.956297 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh" event={"ID":"d2ed2092-f16b-465e-b93a-f7c4dd8368e0","Type":"ContainerDied","Data":"36146a6a5520a8b9ef099580acd2faa06d80bf1ce9dd38e15a7114b904c84ce4"} Feb 20 00:22:24 crc kubenswrapper[5119]: I0220 00:22:24.956339 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36146a6a5520a8b9ef099580acd2faa06d80bf1ce9dd38e15a7114b904c84ce4" Feb 20 00:22:24 crc kubenswrapper[5119]: I0220 00:22:24.956306 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh" Feb 20 00:22:24 crc kubenswrapper[5119]: I0220 00:22:24.974046 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8g2zx"] Feb 20 00:22:24 crc kubenswrapper[5119]: I0220 00:22:24.974387 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8g2zx" podUID="00dbc3bd-a09d-48cd-98b6-3543bae44d2d" containerName="registry-server" containerID="cri-o://3d8d41d4e3918c3fb6df45ca4a9c0f531a2581d9d8e4b6b6d19df7cf64a528c1" gracePeriod=2 Feb 20 00:22:25 crc kubenswrapper[5119]: I0220 00:22:25.823997 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8g2zx" Feb 20 00:22:25 crc kubenswrapper[5119]: I0220 00:22:25.898850 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00dbc3bd-a09d-48cd-98b6-3543bae44d2d-utilities\") pod \"00dbc3bd-a09d-48cd-98b6-3543bae44d2d\" (UID: \"00dbc3bd-a09d-48cd-98b6-3543bae44d2d\") " Feb 20 00:22:25 crc kubenswrapper[5119]: I0220 00:22:25.899238 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00dbc3bd-a09d-48cd-98b6-3543bae44d2d-catalog-content\") pod \"00dbc3bd-a09d-48cd-98b6-3543bae44d2d\" (UID: \"00dbc3bd-a09d-48cd-98b6-3543bae44d2d\") " Feb 20 00:22:25 crc kubenswrapper[5119]: I0220 00:22:25.899280 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2ttq\" (UniqueName: \"kubernetes.io/projected/00dbc3bd-a09d-48cd-98b6-3543bae44d2d-kube-api-access-l2ttq\") pod \"00dbc3bd-a09d-48cd-98b6-3543bae44d2d\" (UID: \"00dbc3bd-a09d-48cd-98b6-3543bae44d2d\") " Feb 20 00:22:25 crc kubenswrapper[5119]: I0220 00:22:25.901418 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00dbc3bd-a09d-48cd-98b6-3543bae44d2d-utilities" (OuterVolumeSpecName: "utilities") pod "00dbc3bd-a09d-48cd-98b6-3543bae44d2d" (UID: "00dbc3bd-a09d-48cd-98b6-3543bae44d2d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:22:25 crc kubenswrapper[5119]: I0220 00:22:25.924639 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00dbc3bd-a09d-48cd-98b6-3543bae44d2d-kube-api-access-l2ttq" (OuterVolumeSpecName: "kube-api-access-l2ttq") pod "00dbc3bd-a09d-48cd-98b6-3543bae44d2d" (UID: "00dbc3bd-a09d-48cd-98b6-3543bae44d2d"). InnerVolumeSpecName "kube-api-access-l2ttq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:22:25 crc kubenswrapper[5119]: I0220 00:22:25.952010 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00dbc3bd-a09d-48cd-98b6-3543bae44d2d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "00dbc3bd-a09d-48cd-98b6-3543bae44d2d" (UID: "00dbc3bd-a09d-48cd-98b6-3543bae44d2d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:22:25 crc kubenswrapper[5119]: I0220 00:22:25.966188 5119 generic.go:358] "Generic (PLEG): container finished" podID="00dbc3bd-a09d-48cd-98b6-3543bae44d2d" containerID="3d8d41d4e3918c3fb6df45ca4a9c0f531a2581d9d8e4b6b6d19df7cf64a528c1" exitCode=0 Feb 20 00:22:25 crc kubenswrapper[5119]: I0220 00:22:25.966280 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g2zx" event={"ID":"00dbc3bd-a09d-48cd-98b6-3543bae44d2d","Type":"ContainerDied","Data":"3d8d41d4e3918c3fb6df45ca4a9c0f531a2581d9d8e4b6b6d19df7cf64a528c1"} Feb 20 00:22:25 crc kubenswrapper[5119]: I0220 00:22:25.966324 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g2zx" event={"ID":"00dbc3bd-a09d-48cd-98b6-3543bae44d2d","Type":"ContainerDied","Data":"0310a08739543e2ca9104fb8eca2a9f69294a9b5d2b5ddedddd5f628bf83d53d"} Feb 20 00:22:25 crc kubenswrapper[5119]: I0220 00:22:25.966345 5119 scope.go:117] "RemoveContainer" containerID="3d8d41d4e3918c3fb6df45ca4a9c0f531a2581d9d8e4b6b6d19df7cf64a528c1" Feb 20 00:22:25 crc kubenswrapper[5119]: I0220 00:22:25.966375 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8g2zx" Feb 20 00:22:25 crc kubenswrapper[5119]: I0220 00:22:25.970076 5119 generic.go:358] "Generic (PLEG): container finished" podID="77de016a-acf4-43e7-a390-e30a3d712904" containerID="363e7126392b24f1b96cbdd78273404c90ea155db20af9037dd66629efaf73c4" exitCode=0 Feb 20 00:22:25 crc kubenswrapper[5119]: I0220 00:22:25.970120 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6" event={"ID":"77de016a-acf4-43e7-a390-e30a3d712904","Type":"ContainerDied","Data":"363e7126392b24f1b96cbdd78273404c90ea155db20af9037dd66629efaf73c4"} Feb 20 00:22:25 crc kubenswrapper[5119]: I0220 00:22:25.973038 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pw7fz" event={"ID":"6239da79-2b3e-4abf-802f-e80bdec9bf1c","Type":"ContainerStarted","Data":"8a1fe8fdff9672f9b2dbf6f73d8aa47f9bb96e16a18d5de3c8903f302f50b072"} Feb 20 00:22:25 crc kubenswrapper[5119]: I0220 00:22:25.993081 5119 scope.go:117] "RemoveContainer" containerID="660d205198ddb68dec48661bceded4e48d0661d42915f8f0e5944a7cc6dcc3fe" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.002723 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00dbc3bd-a09d-48cd-98b6-3543bae44d2d-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.002770 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00dbc3bd-a09d-48cd-98b6-3543bae44d2d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.002788 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l2ttq\" (UniqueName: \"kubernetes.io/projected/00dbc3bd-a09d-48cd-98b6-3543bae44d2d-kube-api-access-l2ttq\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.024869 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pw7fz" podStartSLOduration=4.437829703 podStartE2EDuration="5.024848168s" podCreationTimestamp="2026-02-20 00:22:21 
+0000 UTC" firstStartedPulling="2026-02-20 00:22:22.928574147 +0000 UTC m=+724.907538439" lastFinishedPulling="2026-02-20 00:22:23.515592602 +0000 UTC m=+725.494556904" observedRunningTime="2026-02-20 00:22:26.019629367 +0000 UTC m=+727.998593659" watchObservedRunningTime="2026-02-20 00:22:26.024848168 +0000 UTC m=+728.003812460" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.058241 5119 scope.go:117] "RemoveContainer" containerID="498fdac09ff07df54c57a65e5f54d6d82c612a5a3dbed8eb05fc44f6e7d1659a" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.094783 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8g2zx"] Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.095028 5119 scope.go:117] "RemoveContainer" containerID="3d8d41d4e3918c3fb6df45ca4a9c0f531a2581d9d8e4b6b6d19df7cf64a528c1" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.101364 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8g2zx"] Feb 20 00:22:26 crc kubenswrapper[5119]: E0220 00:22:26.103709 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d8d41d4e3918c3fb6df45ca4a9c0f531a2581d9d8e4b6b6d19df7cf64a528c1\": container with ID starting with 3d8d41d4e3918c3fb6df45ca4a9c0f531a2581d9d8e4b6b6d19df7cf64a528c1 not found: ID does not exist" containerID="3d8d41d4e3918c3fb6df45ca4a9c0f531a2581d9d8e4b6b6d19df7cf64a528c1" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.103774 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d8d41d4e3918c3fb6df45ca4a9c0f531a2581d9d8e4b6b6d19df7cf64a528c1"} err="failed to get container status \"3d8d41d4e3918c3fb6df45ca4a9c0f531a2581d9d8e4b6b6d19df7cf64a528c1\": rpc error: code = NotFound desc = could not find container \"3d8d41d4e3918c3fb6df45ca4a9c0f531a2581d9d8e4b6b6d19df7cf64a528c1\": container with ID starting with 3d8d41d4e3918c3fb6df45ca4a9c0f531a2581d9d8e4b6b6d19df7cf64a528c1 not found: ID does not exist" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.103804 5119 scope.go:117] "RemoveContainer" containerID="660d205198ddb68dec48661bceded4e48d0661d42915f8f0e5944a7cc6dcc3fe" Feb 20 00:22:26 crc kubenswrapper[5119]: E0220 00:22:26.105358 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"660d205198ddb68dec48661bceded4e48d0661d42915f8f0e5944a7cc6dcc3fe\": container with ID starting with 660d205198ddb68dec48661bceded4e48d0661d42915f8f0e5944a7cc6dcc3fe not found: ID does not exist" containerID="660d205198ddb68dec48661bceded4e48d0661d42915f8f0e5944a7cc6dcc3fe" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.105383 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"660d205198ddb68dec48661bceded4e48d0661d42915f8f0e5944a7cc6dcc3fe"} err="failed to get container status \"660d205198ddb68dec48661bceded4e48d0661d42915f8f0e5944a7cc6dcc3fe\": rpc error: code = NotFound desc = could not find container \"660d205198ddb68dec48661bceded4e48d0661d42915f8f0e5944a7cc6dcc3fe\": container with ID starting with 660d205198ddb68dec48661bceded4e48d0661d42915f8f0e5944a7cc6dcc3fe not found: ID does not exist" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.105399 5119 scope.go:117] "RemoveContainer" containerID="498fdac09ff07df54c57a65e5f54d6d82c612a5a3dbed8eb05fc44f6e7d1659a" Feb 20 00:22:26 crc kubenswrapper[5119]: 
E0220 00:22:26.109839 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"498fdac09ff07df54c57a65e5f54d6d82c612a5a3dbed8eb05fc44f6e7d1659a\": container with ID starting with 498fdac09ff07df54c57a65e5f54d6d82c612a5a3dbed8eb05fc44f6e7d1659a not found: ID does not exist" containerID="498fdac09ff07df54c57a65e5f54d6d82c612a5a3dbed8eb05fc44f6e7d1659a" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.109872 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"498fdac09ff07df54c57a65e5f54d6d82c612a5a3dbed8eb05fc44f6e7d1659a"} err="failed to get container status \"498fdac09ff07df54c57a65e5f54d6d82c612a5a3dbed8eb05fc44f6e7d1659a\": rpc error: code = NotFound desc = could not find container \"498fdac09ff07df54c57a65e5f54d6d82c612a5a3dbed8eb05fc44f6e7d1659a\": container with ID starting with 498fdac09ff07df54c57a65e5f54d6d82c612a5a3dbed8eb05fc44f6e7d1659a not found: ID does not exist" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.174816 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dtnzd"] Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.175363 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d2ed2092-f16b-465e-b93a-f7c4dd8368e0" containerName="pull" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.175384 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2ed2092-f16b-465e-b93a-f7c4dd8368e0" containerName="pull" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.175394 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d2ed2092-f16b-465e-b93a-f7c4dd8368e0" containerName="extract" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.175401 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2ed2092-f16b-465e-b93a-f7c4dd8368e0" containerName="extract" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.175412 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="27c1b674-a630-4652-8c16-55724136f7d8" containerName="extract" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.175420 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="27c1b674-a630-4652-8c16-55724136f7d8" containerName="extract" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.175430 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="27c1b674-a630-4652-8c16-55724136f7d8" containerName="pull" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.175439 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="27c1b674-a630-4652-8c16-55724136f7d8" containerName="pull" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.175450 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="00dbc3bd-a09d-48cd-98b6-3543bae44d2d" containerName="extract-utilities" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.175459 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="00dbc3bd-a09d-48cd-98b6-3543bae44d2d" containerName="extract-utilities" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.175469 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="27c1b674-a630-4652-8c16-55724136f7d8" containerName="util" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.175486 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="27c1b674-a630-4652-8c16-55724136f7d8" 
containerName="util" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.175497 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="00dbc3bd-a09d-48cd-98b6-3543bae44d2d" containerName="registry-server" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.175504 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="00dbc3bd-a09d-48cd-98b6-3543bae44d2d" containerName="registry-server" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.175514 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d2ed2092-f16b-465e-b93a-f7c4dd8368e0" containerName="util" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.175521 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2ed2092-f16b-465e-b93a-f7c4dd8368e0" containerName="util" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.175563 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="00dbc3bd-a09d-48cd-98b6-3543bae44d2d" containerName="extract-content" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.175571 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="00dbc3bd-a09d-48cd-98b6-3543bae44d2d" containerName="extract-content" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.175672 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="27c1b674-a630-4652-8c16-55724136f7d8" containerName="extract" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.175684 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="00dbc3bd-a09d-48cd-98b6-3543bae44d2d" containerName="registry-server" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.175694 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="d2ed2092-f16b-465e-b93a-f7c4dd8368e0" containerName="extract" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.178808 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dtnzd" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.186282 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dtnzd"] Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.326169 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6sln\" (UniqueName: \"kubernetes.io/projected/e44d5e11-c935-4d15-ac78-24856f2549b1-kube-api-access-z6sln\") pod \"community-operators-dtnzd\" (UID: \"e44d5e11-c935-4d15-ac78-24856f2549b1\") " pod="openshift-marketplace/community-operators-dtnzd" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.326239 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e44d5e11-c935-4d15-ac78-24856f2549b1-utilities\") pod \"community-operators-dtnzd\" (UID: \"e44d5e11-c935-4d15-ac78-24856f2549b1\") " pod="openshift-marketplace/community-operators-dtnzd" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.326286 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e44d5e11-c935-4d15-ac78-24856f2549b1-catalog-content\") pod \"community-operators-dtnzd\" (UID: \"e44d5e11-c935-4d15-ac78-24856f2549b1\") " pod="openshift-marketplace/community-operators-dtnzd" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.427797 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e44d5e11-c935-4d15-ac78-24856f2549b1-utilities\") pod \"community-operators-dtnzd\" (UID: \"e44d5e11-c935-4d15-ac78-24856f2549b1\") " pod="openshift-marketplace/community-operators-dtnzd" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.427867 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e44d5e11-c935-4d15-ac78-24856f2549b1-catalog-content\") pod \"community-operators-dtnzd\" (UID: \"e44d5e11-c935-4d15-ac78-24856f2549b1\") " pod="openshift-marketplace/community-operators-dtnzd" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.427974 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z6sln\" (UniqueName: \"kubernetes.io/projected/e44d5e11-c935-4d15-ac78-24856f2549b1-kube-api-access-z6sln\") pod \"community-operators-dtnzd\" (UID: \"e44d5e11-c935-4d15-ac78-24856f2549b1\") " pod="openshift-marketplace/community-operators-dtnzd" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.428420 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e44d5e11-c935-4d15-ac78-24856f2549b1-utilities\") pod \"community-operators-dtnzd\" (UID: \"e44d5e11-c935-4d15-ac78-24856f2549b1\") " pod="openshift-marketplace/community-operators-dtnzd" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.428473 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e44d5e11-c935-4d15-ac78-24856f2549b1-catalog-content\") pod \"community-operators-dtnzd\" (UID: \"e44d5e11-c935-4d15-ac78-24856f2549b1\") " pod="openshift-marketplace/community-operators-dtnzd" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.445497 5119 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-z6sln\" (UniqueName: \"kubernetes.io/projected/e44d5e11-c935-4d15-ac78-24856f2549b1-kube-api-access-z6sln\") pod \"community-operators-dtnzd\" (UID: \"e44d5e11-c935-4d15-ac78-24856f2549b1\") " pod="openshift-marketplace/community-operators-dtnzd" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.537694 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dtnzd" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.732035 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dtnzd"] Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.864097 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00dbc3bd-a09d-48cd-98b6-3543bae44d2d" path="/var/lib/kubelet/pods/00dbc3bd-a09d-48cd-98b6-3543bae44d2d/volumes" Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.980431 5119 generic.go:358] "Generic (PLEG): container finished" podID="77de016a-acf4-43e7-a390-e30a3d712904" containerID="732454799b29e81a35872d16ee9be871b55a15ac4c5ae347793f983e405df301" exitCode=0 Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.980560 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6" event={"ID":"77de016a-acf4-43e7-a390-e30a3d712904","Type":"ContainerDied","Data":"732454799b29e81a35872d16ee9be871b55a15ac4c5ae347793f983e405df301"} Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.983707 5119 generic.go:358] "Generic (PLEG): container finished" podID="e44d5e11-c935-4d15-ac78-24856f2549b1" containerID="ff355e5d602ad6d78a7676f38b45ef3cb88fe6139bdb301ad0d09fbc85c577c4" exitCode=0 Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.983763 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dtnzd" event={"ID":"e44d5e11-c935-4d15-ac78-24856f2549b1","Type":"ContainerDied","Data":"ff355e5d602ad6d78a7676f38b45ef3cb88fe6139bdb301ad0d09fbc85c577c4"} Feb 20 00:22:26 crc kubenswrapper[5119]: I0220 00:22:26.983948 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dtnzd" event={"ID":"e44d5e11-c935-4d15-ac78-24856f2549b1","Type":"ContainerStarted","Data":"622292e0911de6d309e3bf5eb9623e0e4655ae3566b68b3f76a0654f36e2c354"} Feb 20 00:22:28 crc kubenswrapper[5119]: I0220 00:22:28.251525 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6" Feb 20 00:22:28 crc kubenswrapper[5119]: I0220 00:22:28.354061 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/77de016a-acf4-43e7-a390-e30a3d712904-util\") pod \"77de016a-acf4-43e7-a390-e30a3d712904\" (UID: \"77de016a-acf4-43e7-a390-e30a3d712904\") " Feb 20 00:22:28 crc kubenswrapper[5119]: I0220 00:22:28.354120 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/77de016a-acf4-43e7-a390-e30a3d712904-bundle\") pod \"77de016a-acf4-43e7-a390-e30a3d712904\" (UID: \"77de016a-acf4-43e7-a390-e30a3d712904\") " Feb 20 00:22:28 crc kubenswrapper[5119]: I0220 00:22:28.354157 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4b949\" (UniqueName: \"kubernetes.io/projected/77de016a-acf4-43e7-a390-e30a3d712904-kube-api-access-4b949\") pod \"77de016a-acf4-43e7-a390-e30a3d712904\" (UID: \"77de016a-acf4-43e7-a390-e30a3d712904\") " Feb 20 00:22:28 crc kubenswrapper[5119]: I0220 00:22:28.356107 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77de016a-acf4-43e7-a390-e30a3d712904-bundle" (OuterVolumeSpecName: "bundle") pod "77de016a-acf4-43e7-a390-e30a3d712904" (UID: "77de016a-acf4-43e7-a390-e30a3d712904"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:22:28 crc kubenswrapper[5119]: I0220 00:22:28.362677 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77de016a-acf4-43e7-a390-e30a3d712904-kube-api-access-4b949" (OuterVolumeSpecName: "kube-api-access-4b949") pod "77de016a-acf4-43e7-a390-e30a3d712904" (UID: "77de016a-acf4-43e7-a390-e30a3d712904"). InnerVolumeSpecName "kube-api-access-4b949". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:22:28 crc kubenswrapper[5119]: I0220 00:22:28.364038 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77de016a-acf4-43e7-a390-e30a3d712904-util" (OuterVolumeSpecName: "util") pod "77de016a-acf4-43e7-a390-e30a3d712904" (UID: "77de016a-acf4-43e7-a390-e30a3d712904"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:22:28 crc kubenswrapper[5119]: I0220 00:22:28.455478 5119 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/77de016a-acf4-43e7-a390-e30a3d712904-util\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:28 crc kubenswrapper[5119]: I0220 00:22:28.455524 5119 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/77de016a-acf4-43e7-a390-e30a3d712904-bundle\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:28 crc kubenswrapper[5119]: I0220 00:22:28.455561 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4b949\" (UniqueName: \"kubernetes.io/projected/77de016a-acf4-43e7-a390-e30a3d712904-kube-api-access-4b949\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:28 crc kubenswrapper[5119]: I0220 00:22:28.995941 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6" event={"ID":"77de016a-acf4-43e7-a390-e30a3d712904","Type":"ContainerDied","Data":"6ef36f17c0f7bd4e4014b6d57c02f64636a70948ab566ceddfa4f7f329ee079b"} Feb 20 00:22:28 crc kubenswrapper[5119]: I0220 00:22:28.996381 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ef36f17c0f7bd4e4014b6d57c02f64636a70948ab566ceddfa4f7f329ee079b" Feb 20 00:22:28 crc kubenswrapper[5119]: I0220 00:22:28.995975 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6" Feb 20 00:22:28 crc kubenswrapper[5119]: I0220 00:22:28.998024 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dtnzd" event={"ID":"e44d5e11-c935-4d15-ac78-24856f2549b1","Type":"ContainerStarted","Data":"4cd110886c025e103a526fc4af7192223f96973c3b5b5c98625754ba4275e3f5"} Feb 20 00:22:31 crc kubenswrapper[5119]: I0220 00:22:31.010419 5119 generic.go:358] "Generic (PLEG): container finished" podID="e44d5e11-c935-4d15-ac78-24856f2549b1" containerID="4cd110886c025e103a526fc4af7192223f96973c3b5b5c98625754ba4275e3f5" exitCode=0 Feb 20 00:22:31 crc kubenswrapper[5119]: I0220 00:22:31.010493 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dtnzd" event={"ID":"e44d5e11-c935-4d15-ac78-24856f2549b1","Type":"ContainerDied","Data":"4cd110886c025e103a526fc4af7192223f96973c3b5b5c98625754ba4275e3f5"} Feb 20 00:22:31 crc kubenswrapper[5119]: I0220 00:22:31.762173 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-pw7fz" Feb 20 00:22:31 crc kubenswrapper[5119]: I0220 00:22:31.762474 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pw7fz" Feb 20 00:22:32 crc kubenswrapper[5119]: I0220 00:22:32.817066 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pw7fz" podUID="6239da79-2b3e-4abf-802f-e80bdec9bf1c" containerName="registry-server" probeResult="failure" output=< Feb 20 00:22:32 crc kubenswrapper[5119]: timeout: failed to connect service ":50051" within 1s Feb 20 00:22:32 crc kubenswrapper[5119]: > Feb 20 00:22:33 crc kubenswrapper[5119]: I0220 00:22:33.021821 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dtnzd" 
event={"ID":"e44d5e11-c935-4d15-ac78-24856f2549b1","Type":"ContainerStarted","Data":"a9f3c67db71b8b2e64c73502375236b41f8f10a28ba54bb6673d8b48fc91029a"} Feb 20 00:22:33 crc kubenswrapper[5119]: I0220 00:22:33.042195 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dtnzd" podStartSLOduration=5.47088077 podStartE2EDuration="7.042176298s" podCreationTimestamp="2026-02-20 00:22:26 +0000 UTC" firstStartedPulling="2026-02-20 00:22:26.984604707 +0000 UTC m=+728.963569009" lastFinishedPulling="2026-02-20 00:22:28.555900245 +0000 UTC m=+730.534864537" observedRunningTime="2026-02-20 00:22:33.03926258 +0000 UTC m=+735.018226882" watchObservedRunningTime="2026-02-20 00:22:33.042176298 +0000 UTC m=+735.021140600" Feb 20 00:22:36 crc kubenswrapper[5119]: I0220 00:22:36.538237 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-dtnzd" Feb 20 00:22:36 crc kubenswrapper[5119]: I0220 00:22:36.539644 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dtnzd" Feb 20 00:22:36 crc kubenswrapper[5119]: I0220 00:22:36.596633 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dtnzd" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.121510 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dtnzd" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.292327 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-6fb5cffb47-mmr88"] Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.292968 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="77de016a-acf4-43e7-a390-e30a3d712904" containerName="util" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.292992 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="77de016a-acf4-43e7-a390-e30a3d712904" containerName="util" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.293006 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="77de016a-acf4-43e7-a390-e30a3d712904" containerName="extract" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.293012 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="77de016a-acf4-43e7-a390-e30a3d712904" containerName="extract" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.293030 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="77de016a-acf4-43e7-a390-e30a3d712904" containerName="pull" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.293036 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="77de016a-acf4-43e7-a390-e30a3d712904" containerName="pull" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.293152 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="77de016a-acf4-43e7-a390-e30a3d712904" containerName="extract" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.342853 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-6fb5cffb47-mmr88"] Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.343062 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-6fb5cffb47-mmr88" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.345198 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-x8rfv\"" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.345512 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.346396 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.346471 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.363900 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9466ff4e-af97-45a4-a238-2a352002e378-apiservice-cert\") pod \"elastic-operator-6fb5cffb47-mmr88\" (UID: \"9466ff4e-af97-45a4-a238-2a352002e378\") " pod="service-telemetry/elastic-operator-6fb5cffb47-mmr88" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.363988 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9466ff4e-af97-45a4-a238-2a352002e378-webhook-cert\") pod \"elastic-operator-6fb5cffb47-mmr88\" (UID: \"9466ff4e-af97-45a4-a238-2a352002e378\") " pod="service-telemetry/elastic-operator-6fb5cffb47-mmr88" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.364037 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcssg\" (UniqueName: \"kubernetes.io/projected/9466ff4e-af97-45a4-a238-2a352002e378-kube-api-access-lcssg\") pod \"elastic-operator-6fb5cffb47-mmr88\" (UID: \"9466ff4e-af97-45a4-a238-2a352002e378\") " pod="service-telemetry/elastic-operator-6fb5cffb47-mmr88" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.399525 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-7cddz"] Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.407270 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7cddz" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.409944 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-rc78q\"" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.410098 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.410998 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.412224 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-7cddz"] Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.464937 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9466ff4e-af97-45a4-a238-2a352002e378-apiservice-cert\") pod \"elastic-operator-6fb5cffb47-mmr88\" (UID: \"9466ff4e-af97-45a4-a238-2a352002e378\") " pod="service-telemetry/elastic-operator-6fb5cffb47-mmr88" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.464994 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9466ff4e-af97-45a4-a238-2a352002e378-webhook-cert\") pod \"elastic-operator-6fb5cffb47-mmr88\" (UID: \"9466ff4e-af97-45a4-a238-2a352002e378\") " pod="service-telemetry/elastic-operator-6fb5cffb47-mmr88" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.465034 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lcssg\" (UniqueName: \"kubernetes.io/projected/9466ff4e-af97-45a4-a238-2a352002e378-kube-api-access-lcssg\") pod \"elastic-operator-6fb5cffb47-mmr88\" (UID: \"9466ff4e-af97-45a4-a238-2a352002e378\") " pod="service-telemetry/elastic-operator-6fb5cffb47-mmr88" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.465287 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znzkd\" (UniqueName: \"kubernetes.io/projected/87ccd0bd-19cf-436b-975a-01bb63ec761f-kube-api-access-znzkd\") pod \"obo-prometheus-operator-9bc85b4bf-7cddz\" (UID: \"87ccd0bd-19cf-436b-975a-01bb63ec761f\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7cddz" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.470730 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9466ff4e-af97-45a4-a238-2a352002e378-webhook-cert\") pod \"elastic-operator-6fb5cffb47-mmr88\" (UID: \"9466ff4e-af97-45a4-a238-2a352002e378\") " pod="service-telemetry/elastic-operator-6fb5cffb47-mmr88" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.474697 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9466ff4e-af97-45a4-a238-2a352002e378-apiservice-cert\") pod \"elastic-operator-6fb5cffb47-mmr88\" (UID: \"9466ff4e-af97-45a4-a238-2a352002e378\") " pod="service-telemetry/elastic-operator-6fb5cffb47-mmr88" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.487181 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcssg\" (UniqueName: 
\"kubernetes.io/projected/9466ff4e-af97-45a4-a238-2a352002e378-kube-api-access-lcssg\") pod \"elastic-operator-6fb5cffb47-mmr88\" (UID: \"9466ff4e-af97-45a4-a238-2a352002e378\") " pod="service-telemetry/elastic-operator-6fb5cffb47-mmr88" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.527485 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-55d8ffb48f-6cmxx"] Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.531220 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d8ffb48f-6cmxx" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.533032 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-5xz65\"" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.533508 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.540677 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-55d8ffb48f-6cmxx"] Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.550665 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-55d8ffb48f-m9rzr"] Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.557528 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d8ffb48f-m9rzr" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.561923 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-55d8ffb48f-m9rzr"] Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.566238 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-znzkd\" (UniqueName: \"kubernetes.io/projected/87ccd0bd-19cf-436b-975a-01bb63ec761f-kube-api-access-znzkd\") pod \"obo-prometheus-operator-9bc85b4bf-7cddz\" (UID: \"87ccd0bd-19cf-436b-975a-01bb63ec761f\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7cddz" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.566302 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/47019409-3e65-4a88-89cd-220570c1dea3-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-55d8ffb48f-6cmxx\" (UID: \"47019409-3e65-4a88-89cd-220570c1dea3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d8ffb48f-6cmxx" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.566335 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47019409-3e65-4a88-89cd-220570c1dea3-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-55d8ffb48f-6cmxx\" (UID: \"47019409-3e65-4a88-89cd-220570c1dea3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d8ffb48f-6cmxx" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.626909 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-znzkd\" (UniqueName: 
\"kubernetes.io/projected/87ccd0bd-19cf-436b-975a-01bb63ec761f-kube-api-access-znzkd\") pod \"obo-prometheus-operator-9bc85b4bf-7cddz\" (UID: \"87ccd0bd-19cf-436b-975a-01bb63ec761f\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7cddz" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.661172 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-6fb5cffb47-mmr88" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.669311 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/47019409-3e65-4a88-89cd-220570c1dea3-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-55d8ffb48f-6cmxx\" (UID: \"47019409-3e65-4a88-89cd-220570c1dea3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d8ffb48f-6cmxx" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.669369 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47019409-3e65-4a88-89cd-220570c1dea3-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-55d8ffb48f-6cmxx\" (UID: \"47019409-3e65-4a88-89cd-220570c1dea3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d8ffb48f-6cmxx" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.669430 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c7f3c09e-f40f-4517-81b4-9be7d5a4922c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-55d8ffb48f-m9rzr\" (UID: \"c7f3c09e-f40f-4517-81b4-9be7d5a4922c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d8ffb48f-m9rzr" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.669456 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c7f3c09e-f40f-4517-81b4-9be7d5a4922c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-55d8ffb48f-m9rzr\" (UID: \"c7f3c09e-f40f-4517-81b4-9be7d5a4922c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d8ffb48f-m9rzr" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.680508 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47019409-3e65-4a88-89cd-220570c1dea3-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-55d8ffb48f-6cmxx\" (UID: \"47019409-3e65-4a88-89cd-220570c1dea3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d8ffb48f-6cmxx" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.686657 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/47019409-3e65-4a88-89cd-220570c1dea3-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-55d8ffb48f-6cmxx\" (UID: \"47019409-3e65-4a88-89cd-220570c1dea3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d8ffb48f-6cmxx" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.724624 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7cddz" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.733826 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-85c68dddb-ldlj9"] Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.743245 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-ldlj9" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.745503 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-jskhs\"" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.745671 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.752949 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-ldlj9"] Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.770699 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqlwr\" (UniqueName: \"kubernetes.io/projected/cf6fac41-5b87-46d8-bc02-310e87d1b79c-kube-api-access-nqlwr\") pod \"observability-operator-85c68dddb-ldlj9\" (UID: \"cf6fac41-5b87-46d8-bc02-310e87d1b79c\") " pod="openshift-operators/observability-operator-85c68dddb-ldlj9" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.770773 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/cf6fac41-5b87-46d8-bc02-310e87d1b79c-observability-operator-tls\") pod \"observability-operator-85c68dddb-ldlj9\" (UID: \"cf6fac41-5b87-46d8-bc02-310e87d1b79c\") " pod="openshift-operators/observability-operator-85c68dddb-ldlj9" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.770826 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c7f3c09e-f40f-4517-81b4-9be7d5a4922c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-55d8ffb48f-m9rzr\" (UID: \"c7f3c09e-f40f-4517-81b4-9be7d5a4922c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d8ffb48f-m9rzr" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.770854 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c7f3c09e-f40f-4517-81b4-9be7d5a4922c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-55d8ffb48f-m9rzr\" (UID: \"c7f3c09e-f40f-4517-81b4-9be7d5a4922c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d8ffb48f-m9rzr" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.786913 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c7f3c09e-f40f-4517-81b4-9be7d5a4922c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-55d8ffb48f-m9rzr\" (UID: \"c7f3c09e-f40f-4517-81b4-9be7d5a4922c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d8ffb48f-m9rzr" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.788083 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/c7f3c09e-f40f-4517-81b4-9be7d5a4922c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-55d8ffb48f-m9rzr\" (UID: \"c7f3c09e-f40f-4517-81b4-9be7d5a4922c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d8ffb48f-m9rzr" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.847869 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d8ffb48f-6cmxx" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.863461 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-bzhfk"] Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.871804 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d8ffb48f-m9rzr" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.884948 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nqlwr\" (UniqueName: \"kubernetes.io/projected/cf6fac41-5b87-46d8-bc02-310e87d1b79c-kube-api-access-nqlwr\") pod \"observability-operator-85c68dddb-ldlj9\" (UID: \"cf6fac41-5b87-46d8-bc02-310e87d1b79c\") " pod="openshift-operators/observability-operator-85c68dddb-ldlj9" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.885072 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/cf6fac41-5b87-46d8-bc02-310e87d1b79c-observability-operator-tls\") pod \"observability-operator-85c68dddb-ldlj9\" (UID: \"cf6fac41-5b87-46d8-bc02-310e87d1b79c\") " pod="openshift-operators/observability-operator-85c68dddb-ldlj9" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.889274 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-bzhfk"] Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.889467 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-bzhfk" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.897143 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/cf6fac41-5b87-46d8-bc02-310e87d1b79c-observability-operator-tls\") pod \"observability-operator-85c68dddb-ldlj9\" (UID: \"cf6fac41-5b87-46d8-bc02-310e87d1b79c\") " pod="openshift-operators/observability-operator-85c68dddb-ldlj9" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.898579 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-2kflm\"" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.913042 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqlwr\" (UniqueName: \"kubernetes.io/projected/cf6fac41-5b87-46d8-bc02-310e87d1b79c-kube-api-access-nqlwr\") pod \"observability-operator-85c68dddb-ldlj9\" (UID: \"cf6fac41-5b87-46d8-bc02-310e87d1b79c\") " pod="openshift-operators/observability-operator-85c68dddb-ldlj9" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.959026 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-6fb5cffb47-mmr88"] Feb 20 00:22:37 crc kubenswrapper[5119]: W0220 00:22:37.971732 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9466ff4e_af97_45a4_a238_2a352002e378.slice/crio-43ad72c6f765a0fec43362ba456fa51e177a22e8767bba75e2243724039c198d WatchSource:0}: Error finding container 43ad72c6f765a0fec43362ba456fa51e177a22e8767bba75e2243724039c198d: Status 404 returned error can't find the container with id 43ad72c6f765a0fec43362ba456fa51e177a22e8767bba75e2243724039c198d Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.988229 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d0cbbf1-3ab7-4deb-8dbc-961ef0ed7c7e-openshift-service-ca\") pod \"perses-operator-669c9f96b5-bzhfk\" (UID: \"1d0cbbf1-3ab7-4deb-8dbc-961ef0ed7c7e\") " pod="openshift-operators/perses-operator-669c9f96b5-bzhfk" Feb 20 00:22:37 crc kubenswrapper[5119]: I0220 00:22:37.988297 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7m94\" (UniqueName: \"kubernetes.io/projected/1d0cbbf1-3ab7-4deb-8dbc-961ef0ed7c7e-kube-api-access-d7m94\") pod \"perses-operator-669c9f96b5-bzhfk\" (UID: \"1d0cbbf1-3ab7-4deb-8dbc-961ef0ed7c7e\") " pod="openshift-operators/perses-operator-669c9f96b5-bzhfk" Feb 20 00:22:38 crc kubenswrapper[5119]: I0220 00:22:38.017913 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-7cddz"] Feb 20 00:22:38 crc kubenswrapper[5119]: I0220 00:22:38.060358 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-ldlj9" Feb 20 00:22:38 crc kubenswrapper[5119]: I0220 00:22:38.070964 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7cddz" event={"ID":"87ccd0bd-19cf-436b-975a-01bb63ec761f","Type":"ContainerStarted","Data":"a89fb23b2d4572f0541a42dee64bb2acdd2a54ee7fc8565c5fd42958298361b9"} Feb 20 00:22:38 crc kubenswrapper[5119]: I0220 00:22:38.073209 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-6fb5cffb47-mmr88" event={"ID":"9466ff4e-af97-45a4-a238-2a352002e378","Type":"ContainerStarted","Data":"43ad72c6f765a0fec43362ba456fa51e177a22e8767bba75e2243724039c198d"} Feb 20 00:22:38 crc kubenswrapper[5119]: I0220 00:22:38.091880 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d0cbbf1-3ab7-4deb-8dbc-961ef0ed7c7e-openshift-service-ca\") pod \"perses-operator-669c9f96b5-bzhfk\" (UID: \"1d0cbbf1-3ab7-4deb-8dbc-961ef0ed7c7e\") " pod="openshift-operators/perses-operator-669c9f96b5-bzhfk" Feb 20 00:22:38 crc kubenswrapper[5119]: I0220 00:22:38.091950 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d7m94\" (UniqueName: \"kubernetes.io/projected/1d0cbbf1-3ab7-4deb-8dbc-961ef0ed7c7e-kube-api-access-d7m94\") pod \"perses-operator-669c9f96b5-bzhfk\" (UID: \"1d0cbbf1-3ab7-4deb-8dbc-961ef0ed7c7e\") " pod="openshift-operators/perses-operator-669c9f96b5-bzhfk" Feb 20 00:22:38 crc kubenswrapper[5119]: I0220 00:22:38.093758 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d0cbbf1-3ab7-4deb-8dbc-961ef0ed7c7e-openshift-service-ca\") pod \"perses-operator-669c9f96b5-bzhfk\" (UID: \"1d0cbbf1-3ab7-4deb-8dbc-961ef0ed7c7e\") " pod="openshift-operators/perses-operator-669c9f96b5-bzhfk" Feb 20 00:22:38 crc kubenswrapper[5119]: I0220 00:22:38.117941 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7m94\" (UniqueName: \"kubernetes.io/projected/1d0cbbf1-3ab7-4deb-8dbc-961ef0ed7c7e-kube-api-access-d7m94\") pod \"perses-operator-669c9f96b5-bzhfk\" (UID: \"1d0cbbf1-3ab7-4deb-8dbc-961ef0ed7c7e\") " pod="openshift-operators/perses-operator-669c9f96b5-bzhfk" Feb 20 00:22:38 crc kubenswrapper[5119]: I0220 00:22:38.134403 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-55d8ffb48f-6cmxx"] Feb 20 00:22:38 crc kubenswrapper[5119]: I0220 00:22:38.231946 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-bzhfk" Feb 20 00:22:38 crc kubenswrapper[5119]: I0220 00:22:38.297220 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-ldlj9"] Feb 20 00:22:38 crc kubenswrapper[5119]: W0220 00:22:38.313737 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf6fac41_5b87_46d8_bc02_310e87d1b79c.slice/crio-bca52bdd5b554bde1d5958c0178c1d32c059886c769b87924dab16c78c4c58b6 WatchSource:0}: Error finding container bca52bdd5b554bde1d5958c0178c1d32c059886c769b87924dab16c78c4c58b6: Status 404 returned error can't find the container with id bca52bdd5b554bde1d5958c0178c1d32c059886c769b87924dab16c78c4c58b6 Feb 20 00:22:38 crc kubenswrapper[5119]: I0220 00:22:38.411331 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-55d8ffb48f-m9rzr"] Feb 20 00:22:38 crc kubenswrapper[5119]: W0220 00:22:38.422416 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7f3c09e_f40f_4517_81b4_9be7d5a4922c.slice/crio-4c4efbc50abf02f40e8a8c84c7589a34562660d56fcd01107bbd09182f3100b3 WatchSource:0}: Error finding container 4c4efbc50abf02f40e8a8c84c7589a34562660d56fcd01107bbd09182f3100b3: Status 404 returned error can't find the container with id 4c4efbc50abf02f40e8a8c84c7589a34562660d56fcd01107bbd09182f3100b3 Feb 20 00:22:38 crc kubenswrapper[5119]: I0220 00:22:38.450302 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-bzhfk"] Feb 20 00:22:39 crc kubenswrapper[5119]: I0220 00:22:39.081761 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-ldlj9" event={"ID":"cf6fac41-5b87-46d8-bc02-310e87d1b79c","Type":"ContainerStarted","Data":"bca52bdd5b554bde1d5958c0178c1d32c059886c769b87924dab16c78c4c58b6"} Feb 20 00:22:39 crc kubenswrapper[5119]: I0220 00:22:39.086759 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d8ffb48f-m9rzr" event={"ID":"c7f3c09e-f40f-4517-81b4-9be7d5a4922c","Type":"ContainerStarted","Data":"4c4efbc50abf02f40e8a8c84c7589a34562660d56fcd01107bbd09182f3100b3"} Feb 20 00:22:39 crc kubenswrapper[5119]: I0220 00:22:39.087946 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d8ffb48f-6cmxx" event={"ID":"47019409-3e65-4a88-89cd-220570c1dea3","Type":"ContainerStarted","Data":"fc0adba0bce8fc171a469d2ad2df9abe737550606d0a15d8ca7df53c7d58f74c"} Feb 20 00:22:39 crc kubenswrapper[5119]: I0220 00:22:39.090647 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-bzhfk" event={"ID":"1d0cbbf1-3ab7-4deb-8dbc-961ef0ed7c7e","Type":"ContainerStarted","Data":"d67cd752c20aac17c06e77c6a30e678042f68093b1232737e869d9968f88d1ea"} Feb 20 00:22:39 crc kubenswrapper[5119]: I0220 00:22:39.384399 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dtnzd"] Feb 20 00:22:40 crc kubenswrapper[5119]: I0220 00:22:40.099723 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dtnzd" podUID="e44d5e11-c935-4d15-ac78-24856f2549b1" containerName="registry-server" 
containerID="cri-o://a9f3c67db71b8b2e64c73502375236b41f8f10a28ba54bb6673d8b48fc91029a" gracePeriod=2 Feb 20 00:22:41 crc kubenswrapper[5119]: I0220 00:22:41.116822 5119 generic.go:358] "Generic (PLEG): container finished" podID="e44d5e11-c935-4d15-ac78-24856f2549b1" containerID="a9f3c67db71b8b2e64c73502375236b41f8f10a28ba54bb6673d8b48fc91029a" exitCode=0 Feb 20 00:22:41 crc kubenswrapper[5119]: I0220 00:22:41.116893 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dtnzd" event={"ID":"e44d5e11-c935-4d15-ac78-24856f2549b1","Type":"ContainerDied","Data":"a9f3c67db71b8b2e64c73502375236b41f8f10a28ba54bb6673d8b48fc91029a"} Feb 20 00:22:41 crc kubenswrapper[5119]: I0220 00:22:41.632159 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dtnzd" Feb 20 00:22:41 crc kubenswrapper[5119]: I0220 00:22:41.755769 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e44d5e11-c935-4d15-ac78-24856f2549b1-utilities\") pod \"e44d5e11-c935-4d15-ac78-24856f2549b1\" (UID: \"e44d5e11-c935-4d15-ac78-24856f2549b1\") " Feb 20 00:22:41 crc kubenswrapper[5119]: I0220 00:22:41.755927 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6sln\" (UniqueName: \"kubernetes.io/projected/e44d5e11-c935-4d15-ac78-24856f2549b1-kube-api-access-z6sln\") pod \"e44d5e11-c935-4d15-ac78-24856f2549b1\" (UID: \"e44d5e11-c935-4d15-ac78-24856f2549b1\") " Feb 20 00:22:41 crc kubenswrapper[5119]: I0220 00:22:41.755958 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e44d5e11-c935-4d15-ac78-24856f2549b1-catalog-content\") pod \"e44d5e11-c935-4d15-ac78-24856f2549b1\" (UID: \"e44d5e11-c935-4d15-ac78-24856f2549b1\") " Feb 20 00:22:41 crc kubenswrapper[5119]: I0220 00:22:41.757065 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e44d5e11-c935-4d15-ac78-24856f2549b1-utilities" (OuterVolumeSpecName: "utilities") pod "e44d5e11-c935-4d15-ac78-24856f2549b1" (UID: "e44d5e11-c935-4d15-ac78-24856f2549b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:22:41 crc kubenswrapper[5119]: I0220 00:22:41.762431 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e44d5e11-c935-4d15-ac78-24856f2549b1-kube-api-access-z6sln" (OuterVolumeSpecName: "kube-api-access-z6sln") pod "e44d5e11-c935-4d15-ac78-24856f2549b1" (UID: "e44d5e11-c935-4d15-ac78-24856f2549b1"). InnerVolumeSpecName "kube-api-access-z6sln". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:22:41 crc kubenswrapper[5119]: I0220 00:22:41.811556 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pw7fz" Feb 20 00:22:41 crc kubenswrapper[5119]: I0220 00:22:41.838388 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e44d5e11-c935-4d15-ac78-24856f2549b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e44d5e11-c935-4d15-ac78-24856f2549b1" (UID: "e44d5e11-c935-4d15-ac78-24856f2549b1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:22:41 crc kubenswrapper[5119]: I0220 00:22:41.858814 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e44d5e11-c935-4d15-ac78-24856f2549b1-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:41 crc kubenswrapper[5119]: I0220 00:22:41.858861 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z6sln\" (UniqueName: \"kubernetes.io/projected/e44d5e11-c935-4d15-ac78-24856f2549b1-kube-api-access-z6sln\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:41 crc kubenswrapper[5119]: I0220 00:22:41.858874 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e44d5e11-c935-4d15-ac78-24856f2549b1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:41 crc kubenswrapper[5119]: I0220 00:22:41.909388 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pw7fz" Feb 20 00:22:42 crc kubenswrapper[5119]: I0220 00:22:42.126530 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dtnzd" event={"ID":"e44d5e11-c935-4d15-ac78-24856f2549b1","Type":"ContainerDied","Data":"622292e0911de6d309e3bf5eb9623e0e4655ae3566b68b3f76a0654f36e2c354"} Feb 20 00:22:42 crc kubenswrapper[5119]: I0220 00:22:42.126566 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dtnzd" Feb 20 00:22:42 crc kubenswrapper[5119]: I0220 00:22:42.126596 5119 scope.go:117] "RemoveContainer" containerID="a9f3c67db71b8b2e64c73502375236b41f8f10a28ba54bb6673d8b48fc91029a" Feb 20 00:22:42 crc kubenswrapper[5119]: I0220 00:22:42.165867 5119 patch_prober.go:28] interesting pod/machine-config-daemon-l7jjp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:22:42 crc kubenswrapper[5119]: I0220 00:22:42.165920 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" podUID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:22:42 crc kubenswrapper[5119]: I0220 00:22:42.172137 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dtnzd"] Feb 20 00:22:42 crc kubenswrapper[5119]: I0220 00:22:42.177919 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dtnzd"] Feb 20 00:22:42 crc kubenswrapper[5119]: I0220 00:22:42.268094 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-fgh87"] Feb 20 00:22:42 crc kubenswrapper[5119]: I0220 00:22:42.268819 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e44d5e11-c935-4d15-ac78-24856f2549b1" containerName="registry-server" Feb 20 00:22:42 crc kubenswrapper[5119]: I0220 00:22:42.268843 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="e44d5e11-c935-4d15-ac78-24856f2549b1" containerName="registry-server" Feb 20 00:22:42 crc kubenswrapper[5119]: I0220 00:22:42.268867 5119 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="e44d5e11-c935-4d15-ac78-24856f2549b1" containerName="extract-utilities" Feb 20 00:22:42 crc kubenswrapper[5119]: I0220 00:22:42.268876 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="e44d5e11-c935-4d15-ac78-24856f2549b1" containerName="extract-utilities" Feb 20 00:22:42 crc kubenswrapper[5119]: I0220 00:22:42.268913 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e44d5e11-c935-4d15-ac78-24856f2549b1" containerName="extract-content" Feb 20 00:22:42 crc kubenswrapper[5119]: I0220 00:22:42.268923 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="e44d5e11-c935-4d15-ac78-24856f2549b1" containerName="extract-content" Feb 20 00:22:42 crc kubenswrapper[5119]: I0220 00:22:42.269055 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="e44d5e11-c935-4d15-ac78-24856f2549b1" containerName="registry-server" Feb 20 00:22:42 crc kubenswrapper[5119]: I0220 00:22:42.277183 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-fgh87" Feb 20 00:22:42 crc kubenswrapper[5119]: I0220 00:22:42.285919 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-r5ggn\"" Feb 20 00:22:42 crc kubenswrapper[5119]: I0220 00:22:42.288437 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-fgh87"] Feb 20 00:22:42 crc kubenswrapper[5119]: I0220 00:22:42.288846 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Feb 20 00:22:42 crc kubenswrapper[5119]: I0220 00:22:42.292705 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:22:42 crc kubenswrapper[5119]: I0220 00:22:42.369840 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2bcn\" (UniqueName: \"kubernetes.io/projected/33dab54d-b1a9-4703-a6bb-01c9384ddcf1-kube-api-access-q2bcn\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-fgh87\" (UID: \"33dab54d-b1a9-4703-a6bb-01c9384ddcf1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-fgh87" Feb 20 00:22:42 crc kubenswrapper[5119]: I0220 00:22:42.370081 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/33dab54d-b1a9-4703-a6bb-01c9384ddcf1-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-fgh87\" (UID: \"33dab54d-b1a9-4703-a6bb-01c9384ddcf1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-fgh87" Feb 20 00:22:42 crc kubenswrapper[5119]: I0220 00:22:42.475685 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q2bcn\" (UniqueName: \"kubernetes.io/projected/33dab54d-b1a9-4703-a6bb-01c9384ddcf1-kube-api-access-q2bcn\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-fgh87\" (UID: \"33dab54d-b1a9-4703-a6bb-01c9384ddcf1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-fgh87" Feb 20 00:22:42 crc kubenswrapper[5119]: I0220 00:22:42.475751 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"tmp\" (UniqueName: \"kubernetes.io/empty-dir/33dab54d-b1a9-4703-a6bb-01c9384ddcf1-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-fgh87\" (UID: \"33dab54d-b1a9-4703-a6bb-01c9384ddcf1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-fgh87" Feb 20 00:22:42 crc kubenswrapper[5119]: I0220 00:22:42.476320 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/33dab54d-b1a9-4703-a6bb-01c9384ddcf1-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-fgh87\" (UID: \"33dab54d-b1a9-4703-a6bb-01c9384ddcf1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-fgh87" Feb 20 00:22:42 crc kubenswrapper[5119]: I0220 00:22:42.501654 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2bcn\" (UniqueName: \"kubernetes.io/projected/33dab54d-b1a9-4703-a6bb-01c9384ddcf1-kube-api-access-q2bcn\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-fgh87\" (UID: \"33dab54d-b1a9-4703-a6bb-01c9384ddcf1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-fgh87" Feb 20 00:22:42 crc kubenswrapper[5119]: I0220 00:22:42.601662 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-fgh87" Feb 20 00:22:42 crc kubenswrapper[5119]: I0220 00:22:42.866966 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e44d5e11-c935-4d15-ac78-24856f2549b1" path="/var/lib/kubelet/pods/e44d5e11-c935-4d15-ac78-24856f2549b1/volumes" Feb 20 00:22:45 crc kubenswrapper[5119]: I0220 00:22:45.917829 5119 scope.go:117] "RemoveContainer" containerID="4cd110886c025e103a526fc4af7192223f96973c3b5b5c98625754ba4275e3f5" Feb 20 00:22:47 crc kubenswrapper[5119]: I0220 00:22:47.770031 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pw7fz"] Feb 20 00:22:47 crc kubenswrapper[5119]: I0220 00:22:47.770359 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pw7fz" podUID="6239da79-2b3e-4abf-802f-e80bdec9bf1c" containerName="registry-server" containerID="cri-o://8a1fe8fdff9672f9b2dbf6f73d8aa47f9bb96e16a18d5de3c8903f302f50b072" gracePeriod=2 Feb 20 00:22:48 crc kubenswrapper[5119]: I0220 00:22:48.194953 5119 generic.go:358] "Generic (PLEG): container finished" podID="6239da79-2b3e-4abf-802f-e80bdec9bf1c" containerID="8a1fe8fdff9672f9b2dbf6f73d8aa47f9bb96e16a18d5de3c8903f302f50b072" exitCode=0 Feb 20 00:22:48 crc kubenswrapper[5119]: I0220 00:22:48.195156 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pw7fz" event={"ID":"6239da79-2b3e-4abf-802f-e80bdec9bf1c","Type":"ContainerDied","Data":"8a1fe8fdff9672f9b2dbf6f73d8aa47f9bb96e16a18d5de3c8903f302f50b072"} Feb 20 00:22:51 crc kubenswrapper[5119]: E0220 00:22:51.815825 5119 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8a1fe8fdff9672f9b2dbf6f73d8aa47f9bb96e16a18d5de3c8903f302f50b072 is running failed: container process not found" containerID="8a1fe8fdff9672f9b2dbf6f73d8aa47f9bb96e16a18d5de3c8903f302f50b072" cmd=["grpc_health_probe","-addr=:50051"] Feb 20 00:22:51 crc kubenswrapper[5119]: E0220 00:22:51.816680 5119 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc 
= container is not created or running: checking if PID of 8a1fe8fdff9672f9b2dbf6f73d8aa47f9bb96e16a18d5de3c8903f302f50b072 is running failed: container process not found" containerID="8a1fe8fdff9672f9b2dbf6f73d8aa47f9bb96e16a18d5de3c8903f302f50b072" cmd=["grpc_health_probe","-addr=:50051"] Feb 20 00:22:51 crc kubenswrapper[5119]: E0220 00:22:51.817285 5119 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8a1fe8fdff9672f9b2dbf6f73d8aa47f9bb96e16a18d5de3c8903f302f50b072 is running failed: container process not found" containerID="8a1fe8fdff9672f9b2dbf6f73d8aa47f9bb96e16a18d5de3c8903f302f50b072" cmd=["grpc_health_probe","-addr=:50051"] Feb 20 00:22:51 crc kubenswrapper[5119]: E0220 00:22:51.817344 5119 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8a1fe8fdff9672f9b2dbf6f73d8aa47f9bb96e16a18d5de3c8903f302f50b072 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-pw7fz" podUID="6239da79-2b3e-4abf-802f-e80bdec9bf1c" containerName="registry-server" probeResult="unknown" Feb 20 00:22:52 crc kubenswrapper[5119]: I0220 00:22:52.051322 5119 scope.go:117] "RemoveContainer" containerID="ff355e5d602ad6d78a7676f38b45ef3cb88fe6139bdb301ad0d09fbc85c577c4" Feb 20 00:22:52 crc kubenswrapper[5119]: I0220 00:22:52.066991 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pw7fz" Feb 20 00:22:52 crc kubenswrapper[5119]: I0220 00:22:52.220869 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6239da79-2b3e-4abf-802f-e80bdec9bf1c-utilities\") pod \"6239da79-2b3e-4abf-802f-e80bdec9bf1c\" (UID: \"6239da79-2b3e-4abf-802f-e80bdec9bf1c\") " Feb 20 00:22:52 crc kubenswrapper[5119]: I0220 00:22:52.221270 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6239da79-2b3e-4abf-802f-e80bdec9bf1c-catalog-content\") pod \"6239da79-2b3e-4abf-802f-e80bdec9bf1c\" (UID: \"6239da79-2b3e-4abf-802f-e80bdec9bf1c\") " Feb 20 00:22:52 crc kubenswrapper[5119]: I0220 00:22:52.221460 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfskj\" (UniqueName: \"kubernetes.io/projected/6239da79-2b3e-4abf-802f-e80bdec9bf1c-kube-api-access-zfskj\") pod \"6239da79-2b3e-4abf-802f-e80bdec9bf1c\" (UID: \"6239da79-2b3e-4abf-802f-e80bdec9bf1c\") " Feb 20 00:22:52 crc kubenswrapper[5119]: I0220 00:22:52.222591 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6239da79-2b3e-4abf-802f-e80bdec9bf1c-utilities" (OuterVolumeSpecName: "utilities") pod "6239da79-2b3e-4abf-802f-e80bdec9bf1c" (UID: "6239da79-2b3e-4abf-802f-e80bdec9bf1c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:22:52 crc kubenswrapper[5119]: I0220 00:22:52.234915 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6239da79-2b3e-4abf-802f-e80bdec9bf1c-kube-api-access-zfskj" (OuterVolumeSpecName: "kube-api-access-zfskj") pod "6239da79-2b3e-4abf-802f-e80bdec9bf1c" (UID: "6239da79-2b3e-4abf-802f-e80bdec9bf1c"). InnerVolumeSpecName "kube-api-access-zfskj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:22:52 crc kubenswrapper[5119]: I0220 00:22:52.245835 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pw7fz" event={"ID":"6239da79-2b3e-4abf-802f-e80bdec9bf1c","Type":"ContainerDied","Data":"ed8dbb4cbacb1c4ba04a238b18de2dc06d467dec1d5fefd9168c9fbbd81af6ac"} Feb 20 00:22:52 crc kubenswrapper[5119]: I0220 00:22:52.245861 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pw7fz" Feb 20 00:22:52 crc kubenswrapper[5119]: I0220 00:22:52.245897 5119 scope.go:117] "RemoveContainer" containerID="8a1fe8fdff9672f9b2dbf6f73d8aa47f9bb96e16a18d5de3c8903f302f50b072" Feb 20 00:22:52 crc kubenswrapper[5119]: I0220 00:22:52.323670 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-fgh87"] Feb 20 00:22:52 crc kubenswrapper[5119]: I0220 00:22:52.324071 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6239da79-2b3e-4abf-802f-e80bdec9bf1c-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:52 crc kubenswrapper[5119]: I0220 00:22:52.324095 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zfskj\" (UniqueName: \"kubernetes.io/projected/6239da79-2b3e-4abf-802f-e80bdec9bf1c-kube-api-access-zfskj\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:52 crc kubenswrapper[5119]: I0220 00:22:52.330838 5119 scope.go:117] "RemoveContainer" containerID="f676f7faafd8b74d62ba53cd01005e0284818f1975cc08897260c790bd1a7dae" Feb 20 00:22:52 crc kubenswrapper[5119]: W0220 00:22:52.388292 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33dab54d_b1a9_4703_a6bb_01c9384ddcf1.slice/crio-0fb2023ac715a9b59f6cde0400e6143778cf523137e522c7fc94ccb0c7416552 WatchSource:0}: Error finding container 0fb2023ac715a9b59f6cde0400e6143778cf523137e522c7fc94ccb0c7416552: Status 404 returned error can't find the container with id 0fb2023ac715a9b59f6cde0400e6143778cf523137e522c7fc94ccb0c7416552 Feb 20 00:22:52 crc kubenswrapper[5119]: I0220 00:22:52.393118 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6239da79-2b3e-4abf-802f-e80bdec9bf1c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6239da79-2b3e-4abf-802f-e80bdec9bf1c" (UID: "6239da79-2b3e-4abf-802f-e80bdec9bf1c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:22:52 crc kubenswrapper[5119]: I0220 00:22:52.394151 5119 scope.go:117] "RemoveContainer" containerID="5196aaf10ec3d5862337e6e725ae10a18fa24c05ed644916c699f21ee5309721" Feb 20 00:22:52 crc kubenswrapper[5119]: I0220 00:22:52.425098 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6239da79-2b3e-4abf-802f-e80bdec9bf1c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:52 crc kubenswrapper[5119]: I0220 00:22:52.571262 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pw7fz"] Feb 20 00:22:52 crc kubenswrapper[5119]: I0220 00:22:52.574495 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pw7fz"] Feb 20 00:22:52 crc kubenswrapper[5119]: I0220 00:22:52.864734 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6239da79-2b3e-4abf-802f-e80bdec9bf1c" path="/var/lib/kubelet/pods/6239da79-2b3e-4abf-802f-e80bdec9bf1c/volumes" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.274065 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d8ffb48f-6cmxx" event={"ID":"47019409-3e65-4a88-89cd-220570c1dea3","Type":"ContainerStarted","Data":"2318a2e3d489ca9eb611b5d29267219dfa9d0dda7a1b57c15fd2a0a81dd9f839"} Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.278801 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-bzhfk" event={"ID":"1d0cbbf1-3ab7-4deb-8dbc-961ef0ed7c7e","Type":"ContainerStarted","Data":"1435115060a837f135e8f16e272241931a6bf65582a3350f83ada936b269c1d8"} Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.279333 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-669c9f96b5-bzhfk" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.280587 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7cddz" event={"ID":"87ccd0bd-19cf-436b-975a-01bb63ec761f","Type":"ContainerStarted","Data":"757dea3edc77c601e2116c1db3bfe321dc7651cb0d2d4eb218f5276a287f1118"} Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.283148 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-6fb5cffb47-mmr88" event={"ID":"9466ff4e-af97-45a4-a238-2a352002e378","Type":"ContainerStarted","Data":"cf06153db8ebb7b4d50659688efb2472c8b20202e70407266e2f991effe03186"} Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.285065 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-ldlj9" event={"ID":"cf6fac41-5b87-46d8-bc02-310e87d1b79c","Type":"ContainerStarted","Data":"ecc9abb1337ec521a251a4efea61dea60926ad4e0d0505a960708160c9728685"} Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.285389 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-85c68dddb-ldlj9" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.286506 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d8ffb48f-m9rzr" event={"ID":"c7f3c09e-f40f-4517-81b4-9be7d5a4922c","Type":"ContainerStarted","Data":"95cc599e44c8cf1eb1ed3ad666c102161ccd58573a4f35d940ff37e7374a0ec1"} Feb 20 00:22:53 
crc kubenswrapper[5119]: I0220 00:22:53.288000 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-fgh87" event={"ID":"33dab54d-b1a9-4703-a6bb-01c9384ddcf1","Type":"ContainerStarted","Data":"0fb2023ac715a9b59f6cde0400e6143778cf523137e522c7fc94ccb0c7416552"} Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.288743 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-85c68dddb-ldlj9" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.302465 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d8ffb48f-6cmxx" podStartSLOduration=2.371118897 podStartE2EDuration="16.302447557s" podCreationTimestamp="2026-02-20 00:22:37 +0000 UTC" firstStartedPulling="2026-02-20 00:22:38.151670067 +0000 UTC m=+740.130634349" lastFinishedPulling="2026-02-20 00:22:52.082998717 +0000 UTC m=+754.061963009" observedRunningTime="2026-02-20 00:22:53.299741204 +0000 UTC m=+755.278705506" watchObservedRunningTime="2026-02-20 00:22:53.302447557 +0000 UTC m=+755.281411849" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.321280 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-85c68dddb-ldlj9" podStartSLOduration=2.550256664 podStartE2EDuration="16.321252767s" podCreationTimestamp="2026-02-20 00:22:37 +0000 UTC" firstStartedPulling="2026-02-20 00:22:38.317140284 +0000 UTC m=+740.296104566" lastFinishedPulling="2026-02-20 00:22:52.088136377 +0000 UTC m=+754.067100669" observedRunningTime="2026-02-20 00:22:53.318012169 +0000 UTC m=+755.296976461" watchObservedRunningTime="2026-02-20 00:22:53.321252767 +0000 UTC m=+755.300217059" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.357844 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-669c9f96b5-bzhfk" podStartSLOduration=2.772060717 podStartE2EDuration="16.357821919s" podCreationTimestamp="2026-02-20 00:22:37 +0000 UTC" firstStartedPulling="2026-02-20 00:22:38.474379416 +0000 UTC m=+740.453343698" lastFinishedPulling="2026-02-20 00:22:52.060140608 +0000 UTC m=+754.039104900" observedRunningTime="2026-02-20 00:22:53.35308805 +0000 UTC m=+755.332052342" watchObservedRunningTime="2026-02-20 00:22:53.357821919 +0000 UTC m=+755.336786211" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.379227 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7cddz" podStartSLOduration=2.336815787 podStartE2EDuration="16.379197148s" podCreationTimestamp="2026-02-20 00:22:37 +0000 UTC" firstStartedPulling="2026-02-20 00:22:38.039010103 +0000 UTC m=+740.017974395" lastFinishedPulling="2026-02-20 00:22:52.081391464 +0000 UTC m=+754.060355756" observedRunningTime="2026-02-20 00:22:53.375850657 +0000 UTC m=+755.354814949" watchObservedRunningTime="2026-02-20 00:22:53.379197148 +0000 UTC m=+755.358161440" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.397645 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d8ffb48f-m9rzr" podStartSLOduration=2.738407274 podStartE2EDuration="16.397611267s" podCreationTimestamp="2026-02-20 00:22:37 +0000 UTC" firstStartedPulling="2026-02-20 00:22:38.426673383 +0000 UTC m=+740.405637675" 
lastFinishedPulling="2026-02-20 00:22:52.085877386 +0000 UTC m=+754.064841668" observedRunningTime="2026-02-20 00:22:53.394520023 +0000 UTC m=+755.373484315" watchObservedRunningTime="2026-02-20 00:22:53.397611267 +0000 UTC m=+755.376575559" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.422568 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-6fb5cffb47-mmr88" podStartSLOduration=2.473073512 podStartE2EDuration="16.422524853s" podCreationTimestamp="2026-02-20 00:22:37 +0000 UTC" firstStartedPulling="2026-02-20 00:22:37.97912517 +0000 UTC m=+739.958089462" lastFinishedPulling="2026-02-20 00:22:51.928576511 +0000 UTC m=+753.907540803" observedRunningTime="2026-02-20 00:22:53.416135189 +0000 UTC m=+755.395099471" watchObservedRunningTime="2026-02-20 00:22:53.422524853 +0000 UTC m=+755.401489145" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.532227 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.532897 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6239da79-2b3e-4abf-802f-e80bdec9bf1c" containerName="registry-server" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.532919 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="6239da79-2b3e-4abf-802f-e80bdec9bf1c" containerName="registry-server" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.532935 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6239da79-2b3e-4abf-802f-e80bdec9bf1c" containerName="extract-utilities" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.532942 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="6239da79-2b3e-4abf-802f-e80bdec9bf1c" containerName="extract-utilities" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.532986 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6239da79-2b3e-4abf-802f-e80bdec9bf1c" containerName="extract-content" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.532993 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="6239da79-2b3e-4abf-802f-e80bdec9bf1c" containerName="extract-content" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.533150 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="6239da79-2b3e-4abf-802f-e80bdec9bf1c" containerName="registry-server" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.538340 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.541117 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.541463 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.542768 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.543685 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.543923 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.544069 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.544319 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-jj9xr\"" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.544756 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.546425 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.559024 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.643724 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/171ea291-6d79-46a5-aa1a-02eb579d0774-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.643778 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.643819 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/171ea291-6d79-46a5-aa1a-02eb579d0774-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.643841 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: 
\"kubernetes.io/secret/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.643868 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.643891 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/171ea291-6d79-46a5-aa1a-02eb579d0774-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.643923 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.643950 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.643986 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.644014 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.644056 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/171ea291-6d79-46a5-aa1a-02eb579d0774-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.644093 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.644121 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.644142 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.644178 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.746268 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/171ea291-6d79-46a5-aa1a-02eb579d0774-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.746497 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.746610 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.746637 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.746725 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.746840 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/171ea291-6d79-46a5-aa1a-02eb579d0774-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.746958 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/171ea291-6d79-46a5-aa1a-02eb579d0774-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.747057 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.747168 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/171ea291-6d79-46a5-aa1a-02eb579d0774-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.747198 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.747226 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/171ea291-6d79-46a5-aa1a-02eb579d0774-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.747361 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.747407 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/171ea291-6d79-46a5-aa1a-02eb579d0774-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 
00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.747445 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.747463 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.747523 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.747564 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.747701 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.747971 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.748254 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/171ea291-6d79-46a5-aa1a-02eb579d0774-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.748274 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.748330 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.748501 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.764636 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.764728 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.765019 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.771416 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.773163 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.777419 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/171ea291-6d79-46a5-aa1a-02eb579d0774-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.797244 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/171ea291-6d79-46a5-aa1a-02eb579d0774-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" 
(UID: \"171ea291-6d79-46a5-aa1a-02eb579d0774\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:53 crc kubenswrapper[5119]: I0220 00:22:53.858888 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:22:54 crc kubenswrapper[5119]: I0220 00:22:54.089070 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 20 00:22:54 crc kubenswrapper[5119]: I0220 00:22:54.294477 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"171ea291-6d79-46a5-aa1a-02eb579d0774","Type":"ContainerStarted","Data":"43ce821b4197dc36526482f3abbd13c6d03b56a6b0336320881849db6a33183c"} Feb 20 00:22:59 crc kubenswrapper[5119]: I0220 00:22:59.351009 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-fgh87" event={"ID":"33dab54d-b1a9-4703-a6bb-01c9384ddcf1","Type":"ContainerStarted","Data":"ad62fa45d203ccd8d6cf94823121038fdcf1688c6b09c8a04c0135d405bcc7b2"} Feb 20 00:22:59 crc kubenswrapper[5119]: I0220 00:22:59.372937 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-fgh87" podStartSLOduration=11.838344685 podStartE2EDuration="17.37291836s" podCreationTimestamp="2026-02-20 00:22:42 +0000 UTC" firstStartedPulling="2026-02-20 00:22:52.394363629 +0000 UTC m=+754.373327921" lastFinishedPulling="2026-02-20 00:22:57.928937294 +0000 UTC m=+759.907901596" observedRunningTime="2026-02-20 00:22:59.372258482 +0000 UTC m=+761.351222784" watchObservedRunningTime="2026-02-20 00:22:59.37291836 +0000 UTC m=+761.351882652" Feb 20 00:23:02 crc kubenswrapper[5119]: I0220 00:23:02.016601 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-hvxjc"] Feb 20 00:23:02 crc kubenswrapper[5119]: I0220 00:23:02.027511 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-hvxjc"] Feb 20 00:23:02 crc kubenswrapper[5119]: I0220 00:23:02.027688 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-hvxjc" Feb 20 00:23:02 crc kubenswrapper[5119]: I0220 00:23:02.029642 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Feb 20 00:23:02 crc kubenswrapper[5119]: I0220 00:23:02.029677 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-vdn8r\"" Feb 20 00:23:02 crc kubenswrapper[5119]: I0220 00:23:02.030484 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Feb 20 00:23:02 crc kubenswrapper[5119]: I0220 00:23:02.199810 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/31cacb6a-1bdb-47a3-a04b-e86aa109f295-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-hvxjc\" (UID: \"31cacb6a-1bdb-47a3-a04b-e86aa109f295\") " pod="cert-manager/cert-manager-webhook-597b96b99b-hvxjc" Feb 20 00:23:02 crc kubenswrapper[5119]: I0220 00:23:02.200050 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tf2l\" (UniqueName: \"kubernetes.io/projected/31cacb6a-1bdb-47a3-a04b-e86aa109f295-kube-api-access-2tf2l\") pod \"cert-manager-webhook-597b96b99b-hvxjc\" (UID: \"31cacb6a-1bdb-47a3-a04b-e86aa109f295\") " pod="cert-manager/cert-manager-webhook-597b96b99b-hvxjc" Feb 20 00:23:02 crc kubenswrapper[5119]: I0220 00:23:02.301854 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/31cacb6a-1bdb-47a3-a04b-e86aa109f295-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-hvxjc\" (UID: \"31cacb6a-1bdb-47a3-a04b-e86aa109f295\") " pod="cert-manager/cert-manager-webhook-597b96b99b-hvxjc" Feb 20 00:23:02 crc kubenswrapper[5119]: I0220 00:23:02.301970 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2tf2l\" (UniqueName: \"kubernetes.io/projected/31cacb6a-1bdb-47a3-a04b-e86aa109f295-kube-api-access-2tf2l\") pod \"cert-manager-webhook-597b96b99b-hvxjc\" (UID: \"31cacb6a-1bdb-47a3-a04b-e86aa109f295\") " pod="cert-manager/cert-manager-webhook-597b96b99b-hvxjc" Feb 20 00:23:02 crc kubenswrapper[5119]: I0220 00:23:02.327041 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tf2l\" (UniqueName: \"kubernetes.io/projected/31cacb6a-1bdb-47a3-a04b-e86aa109f295-kube-api-access-2tf2l\") pod \"cert-manager-webhook-597b96b99b-hvxjc\" (UID: \"31cacb6a-1bdb-47a3-a04b-e86aa109f295\") " pod="cert-manager/cert-manager-webhook-597b96b99b-hvxjc" Feb 20 00:23:02 crc kubenswrapper[5119]: I0220 00:23:02.328274 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/31cacb6a-1bdb-47a3-a04b-e86aa109f295-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-hvxjc\" (UID: \"31cacb6a-1bdb-47a3-a04b-e86aa109f295\") " pod="cert-manager/cert-manager-webhook-597b96b99b-hvxjc" Feb 20 00:23:02 crc kubenswrapper[5119]: I0220 00:23:02.360309 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-hvxjc" Feb 20 00:23:04 crc kubenswrapper[5119]: I0220 00:23:04.297656 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-669c9f96b5-bzhfk" Feb 20 00:23:05 crc kubenswrapper[5119]: I0220 00:23:05.508200 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-2ngdf"] Feb 20 00:23:05 crc kubenswrapper[5119]: I0220 00:23:05.514659 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-2ngdf" Feb 20 00:23:05 crc kubenswrapper[5119]: I0220 00:23:05.516717 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-kqkfc\"" Feb 20 00:23:05 crc kubenswrapper[5119]: I0220 00:23:05.524417 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-2ngdf"] Feb 20 00:23:05 crc kubenswrapper[5119]: I0220 00:23:05.555760 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqqkb\" (UniqueName: \"kubernetes.io/projected/06be9c2b-063d-431f-9500-06d071455834-kube-api-access-bqqkb\") pod \"cert-manager-cainjector-8966b78d4-2ngdf\" (UID: \"06be9c2b-063d-431f-9500-06d071455834\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-2ngdf" Feb 20 00:23:05 crc kubenswrapper[5119]: I0220 00:23:05.555906 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/06be9c2b-063d-431f-9500-06d071455834-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-2ngdf\" (UID: \"06be9c2b-063d-431f-9500-06d071455834\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-2ngdf" Feb 20 00:23:05 crc kubenswrapper[5119]: I0220 00:23:05.657455 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/06be9c2b-063d-431f-9500-06d071455834-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-2ngdf\" (UID: \"06be9c2b-063d-431f-9500-06d071455834\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-2ngdf" Feb 20 00:23:05 crc kubenswrapper[5119]: I0220 00:23:05.657570 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bqqkb\" (UniqueName: \"kubernetes.io/projected/06be9c2b-063d-431f-9500-06d071455834-kube-api-access-bqqkb\") pod \"cert-manager-cainjector-8966b78d4-2ngdf\" (UID: \"06be9c2b-063d-431f-9500-06d071455834\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-2ngdf" Feb 20 00:23:05 crc kubenswrapper[5119]: I0220 00:23:05.678003 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqqkb\" (UniqueName: \"kubernetes.io/projected/06be9c2b-063d-431f-9500-06d071455834-kube-api-access-bqqkb\") pod \"cert-manager-cainjector-8966b78d4-2ngdf\" (UID: \"06be9c2b-063d-431f-9500-06d071455834\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-2ngdf" Feb 20 00:23:05 crc kubenswrapper[5119]: I0220 00:23:05.682785 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/06be9c2b-063d-431f-9500-06d071455834-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-2ngdf\" (UID: \"06be9c2b-063d-431f-9500-06d071455834\") " 
pod="cert-manager/cert-manager-cainjector-8966b78d4-2ngdf" Feb 20 00:23:05 crc kubenswrapper[5119]: I0220 00:23:05.830409 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-2ngdf" Feb 20 00:23:06 crc kubenswrapper[5119]: I0220 00:23:06.353150 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-2ngdf"] Feb 20 00:23:06 crc kubenswrapper[5119]: W0220 00:23:06.363286 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06be9c2b_063d_431f_9500_06d071455834.slice/crio-eda717b8d0fb4e2dfa0a5e46e140dd5e2ae98f152eaf431eea679066480aa989 WatchSource:0}: Error finding container eda717b8d0fb4e2dfa0a5e46e140dd5e2ae98f152eaf431eea679066480aa989: Status 404 returned error can't find the container with id eda717b8d0fb4e2dfa0a5e46e140dd5e2ae98f152eaf431eea679066480aa989 Feb 20 00:23:06 crc kubenswrapper[5119]: I0220 00:23:06.394914 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-2ngdf" event={"ID":"06be9c2b-063d-431f-9500-06d071455834","Type":"ContainerStarted","Data":"eda717b8d0fb4e2dfa0a5e46e140dd5e2ae98f152eaf431eea679066480aa989"} Feb 20 00:23:06 crc kubenswrapper[5119]: I0220 00:23:06.397008 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"171ea291-6d79-46a5-aa1a-02eb579d0774","Type":"ContainerStarted","Data":"67e43ce7e2bbb0e5548d71599453c212545918932090c300b21e6265e17d806d"} Feb 20 00:23:06 crc kubenswrapper[5119]: I0220 00:23:06.453697 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-hvxjc"] Feb 20 00:23:06 crc kubenswrapper[5119]: W0220 00:23:06.457836 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31cacb6a_1bdb_47a3_a04b_e86aa109f295.slice/crio-011b65346a61d3b7ff07aa8a00ea25db1e803dd518e1708737c450f3beecf123 WatchSource:0}: Error finding container 011b65346a61d3b7ff07aa8a00ea25db1e803dd518e1708737c450f3beecf123: Status 404 returned error can't find the container with id 011b65346a61d3b7ff07aa8a00ea25db1e803dd518e1708737c450f3beecf123 Feb 20 00:23:06 crc kubenswrapper[5119]: I0220 00:23:06.737080 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 20 00:23:06 crc kubenswrapper[5119]: I0220 00:23:06.771367 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 20 00:23:07 crc kubenswrapper[5119]: I0220 00:23:07.408107 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-hvxjc" event={"ID":"31cacb6a-1bdb-47a3-a04b-e86aa109f295","Type":"ContainerStarted","Data":"011b65346a61d3b7ff07aa8a00ea25db1e803dd518e1708737c450f3beecf123"} Feb 20 00:23:08 crc kubenswrapper[5119]: I0220 00:23:08.415174 5119 generic.go:358] "Generic (PLEG): container finished" podID="171ea291-6d79-46a5-aa1a-02eb579d0774" containerID="67e43ce7e2bbb0e5548d71599453c212545918932090c300b21e6265e17d806d" exitCode=0 Feb 20 00:23:08 crc kubenswrapper[5119]: I0220 00:23:08.415555 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" 
event={"ID":"171ea291-6d79-46a5-aa1a-02eb579d0774","Type":"ContainerDied","Data":"67e43ce7e2bbb0e5548d71599453c212545918932090c300b21e6265e17d806d"} Feb 20 00:23:09 crc kubenswrapper[5119]: I0220 00:23:09.425988 5119 generic.go:358] "Generic (PLEG): container finished" podID="171ea291-6d79-46a5-aa1a-02eb579d0774" containerID="a978ea54912e62c8a7d21ec13cd38543c483cb7e9c3d2f6fa20d6db12741f1a8" exitCode=0 Feb 20 00:23:09 crc kubenswrapper[5119]: I0220 00:23:09.426515 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"171ea291-6d79-46a5-aa1a-02eb579d0774","Type":"ContainerDied","Data":"a978ea54912e62c8a7d21ec13cd38543c483cb7e9c3d2f6fa20d6db12741f1a8"} Feb 20 00:23:11 crc kubenswrapper[5119]: I0220 00:23:11.442363 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-hvxjc" event={"ID":"31cacb6a-1bdb-47a3-a04b-e86aa109f295","Type":"ContainerStarted","Data":"6fc4961a7c2ee69ae4211cd9ed7be1a8a6455dff22321f78cf8d2579c62fb19b"} Feb 20 00:23:11 crc kubenswrapper[5119]: I0220 00:23:11.442947 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-597b96b99b-hvxjc" Feb 20 00:23:11 crc kubenswrapper[5119]: I0220 00:23:11.444751 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"171ea291-6d79-46a5-aa1a-02eb579d0774","Type":"ContainerStarted","Data":"ed0f638ae55a41433c970d9a4f7003141b925c281866e4ade86bf7f9e3ea7c44"} Feb 20 00:23:11 crc kubenswrapper[5119]: I0220 00:23:11.444920 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:23:11 crc kubenswrapper[5119]: I0220 00:23:11.446296 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-2ngdf" event={"ID":"06be9c2b-063d-431f-9500-06d071455834","Type":"ContainerStarted","Data":"c3b1f5a7e6dbd069370cf10d0ff4be6fd8a36e649ca64b5551ae2e2457ecd4e8"} Feb 20 00:23:11 crc kubenswrapper[5119]: I0220 00:23:11.471961 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-597b96b99b-hvxjc" podStartSLOduration=6.159735853 podStartE2EDuration="10.471928026s" podCreationTimestamp="2026-02-20 00:23:01 +0000 UTC" firstStartedPulling="2026-02-20 00:23:06.469784878 +0000 UTC m=+768.448749170" lastFinishedPulling="2026-02-20 00:23:10.781977051 +0000 UTC m=+772.760941343" observedRunningTime="2026-02-20 00:23:11.463812276 +0000 UTC m=+773.442776618" watchObservedRunningTime="2026-02-20 00:23:11.471928026 +0000 UTC m=+773.450892348" Feb 20 00:23:11 crc kubenswrapper[5119]: I0220 00:23:11.502266 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=6.548055286 podStartE2EDuration="18.502237848s" podCreationTimestamp="2026-02-20 00:22:53 +0000 UTC" firstStartedPulling="2026-02-20 00:22:54.102234569 +0000 UTC m=+756.081198851" lastFinishedPulling="2026-02-20 00:23:06.056417121 +0000 UTC m=+768.035381413" observedRunningTime="2026-02-20 00:23:11.501121377 +0000 UTC m=+773.480085669" watchObservedRunningTime="2026-02-20 00:23:11.502237848 +0000 UTC m=+773.481202180" Feb 20 00:23:11 crc kubenswrapper[5119]: I0220 00:23:11.522461 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-8966b78d4-2ngdf" 
podStartSLOduration=2.113713704 podStartE2EDuration="6.522425535s" podCreationTimestamp="2026-02-20 00:23:05 +0000 UTC" firstStartedPulling="2026-02-20 00:23:06.367022971 +0000 UTC m=+768.345987273" lastFinishedPulling="2026-02-20 00:23:10.775734812 +0000 UTC m=+772.754699104" observedRunningTime="2026-02-20 00:23:11.516464873 +0000 UTC m=+773.495429165" watchObservedRunningTime="2026-02-20 00:23:11.522425535 +0000 UTC m=+773.501389867" Feb 20 00:23:12 crc kubenswrapper[5119]: I0220 00:23:12.160394 5119 patch_prober.go:28] interesting pod/machine-config-daemon-l7jjp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:23:12 crc kubenswrapper[5119]: I0220 00:23:12.160475 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" podUID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:23:13 crc kubenswrapper[5119]: I0220 00:23:13.487952 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-759f64656b-lcgw2"] Feb 20 00:23:13 crc kubenswrapper[5119]: I0220 00:23:13.494097 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-759f64656b-lcgw2" Feb 20 00:23:13 crc kubenswrapper[5119]: I0220 00:23:13.501386 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-7fjv7\"" Feb 20 00:23:13 crc kubenswrapper[5119]: I0220 00:23:13.503822 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-lcgw2"] Feb 20 00:23:13 crc kubenswrapper[5119]: I0220 00:23:13.580899 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l49md\" (UniqueName: \"kubernetes.io/projected/ea1d85dc-0434-4191-b4dc-be938aab1430-kube-api-access-l49md\") pod \"cert-manager-759f64656b-lcgw2\" (UID: \"ea1d85dc-0434-4191-b4dc-be938aab1430\") " pod="cert-manager/cert-manager-759f64656b-lcgw2" Feb 20 00:23:13 crc kubenswrapper[5119]: I0220 00:23:13.580948 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea1d85dc-0434-4191-b4dc-be938aab1430-bound-sa-token\") pod \"cert-manager-759f64656b-lcgw2\" (UID: \"ea1d85dc-0434-4191-b4dc-be938aab1430\") " pod="cert-manager/cert-manager-759f64656b-lcgw2" Feb 20 00:23:13 crc kubenswrapper[5119]: I0220 00:23:13.682370 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l49md\" (UniqueName: \"kubernetes.io/projected/ea1d85dc-0434-4191-b4dc-be938aab1430-kube-api-access-l49md\") pod \"cert-manager-759f64656b-lcgw2\" (UID: \"ea1d85dc-0434-4191-b4dc-be938aab1430\") " pod="cert-manager/cert-manager-759f64656b-lcgw2" Feb 20 00:23:13 crc kubenswrapper[5119]: I0220 00:23:13.682437 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea1d85dc-0434-4191-b4dc-be938aab1430-bound-sa-token\") pod \"cert-manager-759f64656b-lcgw2\" (UID: \"ea1d85dc-0434-4191-b4dc-be938aab1430\") " pod="cert-manager/cert-manager-759f64656b-lcgw2" Feb 20 
00:23:13 crc kubenswrapper[5119]: I0220 00:23:13.710232 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea1d85dc-0434-4191-b4dc-be938aab1430-bound-sa-token\") pod \"cert-manager-759f64656b-lcgw2\" (UID: \"ea1d85dc-0434-4191-b4dc-be938aab1430\") " pod="cert-manager/cert-manager-759f64656b-lcgw2" Feb 20 00:23:13 crc kubenswrapper[5119]: I0220 00:23:13.715885 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l49md\" (UniqueName: \"kubernetes.io/projected/ea1d85dc-0434-4191-b4dc-be938aab1430-kube-api-access-l49md\") pod \"cert-manager-759f64656b-lcgw2\" (UID: \"ea1d85dc-0434-4191-b4dc-be938aab1430\") " pod="cert-manager/cert-manager-759f64656b-lcgw2" Feb 20 00:23:13 crc kubenswrapper[5119]: I0220 00:23:13.815359 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-759f64656b-lcgw2" Feb 20 00:23:14 crc kubenswrapper[5119]: I0220 00:23:14.053402 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-lcgw2"] Feb 20 00:23:14 crc kubenswrapper[5119]: W0220 00:23:14.059743 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea1d85dc_0434_4191_b4dc_be938aab1430.slice/crio-3359bbf16b2f816cd94b9accbef06f6881f5d9bde64647b7f5851f8df61c0601 WatchSource:0}: Error finding container 3359bbf16b2f816cd94b9accbef06f6881f5d9bde64647b7f5851f8df61c0601: Status 404 returned error can't find the container with id 3359bbf16b2f816cd94b9accbef06f6881f5d9bde64647b7f5851f8df61c0601 Feb 20 00:23:14 crc kubenswrapper[5119]: I0220 00:23:14.467996 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-lcgw2" event={"ID":"ea1d85dc-0434-4191-b4dc-be938aab1430","Type":"ContainerStarted","Data":"57ced42702a73f3ffcb59445e5cc0a25288b57c1eb2d43f1000f393087dd5913"} Feb 20 00:23:14 crc kubenswrapper[5119]: I0220 00:23:14.468348 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-lcgw2" event={"ID":"ea1d85dc-0434-4191-b4dc-be938aab1430","Type":"ContainerStarted","Data":"3359bbf16b2f816cd94b9accbef06f6881f5d9bde64647b7f5851f8df61c0601"} Feb 20 00:23:14 crc kubenswrapper[5119]: I0220 00:23:14.491979 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-759f64656b-lcgw2" podStartSLOduration=1.4919555789999999 podStartE2EDuration="1.491955579s" podCreationTimestamp="2026-02-20 00:23:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:23:14.48388516 +0000 UTC m=+776.462849482" watchObservedRunningTime="2026-02-20 00:23:14.491955579 +0000 UTC m=+776.470919911" Feb 20 00:23:17 crc kubenswrapper[5119]: I0220 00:23:17.455105 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-597b96b99b-hvxjc" Feb 20 00:23:22 crc kubenswrapper[5119]: I0220 00:23:22.602501 5119 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="171ea291-6d79-46a5-aa1a-02eb579d0774" containerName="elasticsearch" probeResult="failure" output=< Feb 20 00:23:22 crc kubenswrapper[5119]: {"timestamp": "2026-02-20T00:23:22+00:00", "message": "readiness probe failed", "curl_rc": "7"} Feb 20 00:23:22 crc kubenswrapper[5119]: > Feb 20 
00:23:27 crc kubenswrapper[5119]: I0220 00:23:27.802532 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:23:42 crc kubenswrapper[5119]: I0220 00:23:42.160866 5119 patch_prober.go:28] interesting pod/machine-config-daemon-l7jjp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:23:42 crc kubenswrapper[5119]: I0220 00:23:42.161793 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" podUID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:23:42 crc kubenswrapper[5119]: I0220 00:23:42.161879 5119 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" Feb 20 00:23:42 crc kubenswrapper[5119]: I0220 00:23:42.162837 5119 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6e692bfc0f8e3640cdfb629db9ce0f6fdd7db4e721f07aacfb3653d9f3057c7c"} pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 20 00:23:42 crc kubenswrapper[5119]: I0220 00:23:42.162981 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" podUID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerName="machine-config-daemon" containerID="cri-o://6e692bfc0f8e3640cdfb629db9ce0f6fdd7db4e721f07aacfb3653d9f3057c7c" gracePeriod=600 Feb 20 00:23:42 crc kubenswrapper[5119]: I0220 00:23:42.702844 5119 generic.go:358] "Generic (PLEG): container finished" podID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerID="6e692bfc0f8e3640cdfb629db9ce0f6fdd7db4e721f07aacfb3653d9f3057c7c" exitCode=0 Feb 20 00:23:42 crc kubenswrapper[5119]: I0220 00:23:42.702930 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" event={"ID":"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d","Type":"ContainerDied","Data":"6e692bfc0f8e3640cdfb629db9ce0f6fdd7db4e721f07aacfb3653d9f3057c7c"} Feb 20 00:23:42 crc kubenswrapper[5119]: I0220 00:23:42.703345 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" event={"ID":"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d","Type":"ContainerStarted","Data":"89838faa3e23ccc0655e0096613e091fc8decdd475bfdc257b396ab6343fa8f7"} Feb 20 00:23:42 crc kubenswrapper[5119]: I0220 00:23:42.703402 5119 scope.go:117] "RemoveContainer" containerID="eb031711d14c34fef15b7fa26c19329bf458b6e44e95d8bdc0966d1b83c33a00" Feb 20 00:23:50 crc kubenswrapper[5119]: I0220 00:23:50.088866 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"] Feb 20 00:23:50 crc kubenswrapper[5119]: I0220 00:23:50.096094 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 20 00:23:50 crc kubenswrapper[5119]: I0220 00:23:50.098486 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"] Feb 20 00:23:50 crc kubenswrapper[5119]: I0220 00:23:50.099236 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-catalog-configmap-partition-1\"" Feb 20 00:23:50 crc kubenswrapper[5119]: I0220 00:23:50.183005 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/2349fc8f-393f-4f48-91e7-3d876661c850-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"2349fc8f-393f-4f48-91e7-3d876661c850\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 20 00:23:50 crc kubenswrapper[5119]: I0220 00:23:50.183509 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9tt7\" (UniqueName: \"kubernetes.io/projected/2349fc8f-393f-4f48-91e7-3d876661c850-kube-api-access-f9tt7\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"2349fc8f-393f-4f48-91e7-3d876661c850\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 20 00:23:50 crc kubenswrapper[5119]: I0220 00:23:50.183617 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/2349fc8f-393f-4f48-91e7-3d876661c850-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"2349fc8f-393f-4f48-91e7-3d876661c850\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 20 00:23:50 crc kubenswrapper[5119]: I0220 00:23:50.285210 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/2349fc8f-393f-4f48-91e7-3d876661c850-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"2349fc8f-393f-4f48-91e7-3d876661c850\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 20 00:23:50 crc kubenswrapper[5119]: I0220 00:23:50.285343 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/2349fc8f-393f-4f48-91e7-3d876661c850-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"2349fc8f-393f-4f48-91e7-3d876661c850\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 20 00:23:50 crc kubenswrapper[5119]: I0220 00:23:50.285510 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f9tt7\" (UniqueName: 
\"kubernetes.io/projected/2349fc8f-393f-4f48-91e7-3d876661c850-kube-api-access-f9tt7\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"2349fc8f-393f-4f48-91e7-3d876661c850\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 20 00:23:50 crc kubenswrapper[5119]: I0220 00:23:50.286383 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/2349fc8f-393f-4f48-91e7-3d876661c850-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"2349fc8f-393f-4f48-91e7-3d876661c850\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 20 00:23:50 crc kubenswrapper[5119]: I0220 00:23:50.286991 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/2349fc8f-393f-4f48-91e7-3d876661c850-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"2349fc8f-393f-4f48-91e7-3d876661c850\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 20 00:23:50 crc kubenswrapper[5119]: I0220 00:23:50.321240 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9tt7\" (UniqueName: \"kubernetes.io/projected/2349fc8f-393f-4f48-91e7-3d876661c850-kube-api-access-f9tt7\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"2349fc8f-393f-4f48-91e7-3d876661c850\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 20 00:23:50 crc kubenswrapper[5119]: I0220 00:23:50.418993 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 20 00:23:50 crc kubenswrapper[5119]: I0220 00:23:50.904742 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"] Feb 20 00:23:51 crc kubenswrapper[5119]: I0220 00:23:51.804958 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"2349fc8f-393f-4f48-91e7-3d876661c850","Type":"ContainerStarted","Data":"dfdec9a0fa655bac97ecfa28508e48250c942e41583929dfdc37ec20b7f09abc"} Feb 20 00:23:56 crc kubenswrapper[5119]: I0220 00:23:56.843857 5119 generic.go:358] "Generic (PLEG): container finished" podID="2349fc8f-393f-4f48-91e7-3d876661c850" containerID="4f48a6375f0284a8cea2c4ece602b291c7fea20eeec80657df855a1230c9378f" exitCode=0 Feb 20 00:23:56 crc kubenswrapper[5119]: I0220 00:23:56.843964 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"2349fc8f-393f-4f48-91e7-3d876661c850","Type":"ContainerDied","Data":"4f48a6375f0284a8cea2c4ece602b291c7fea20eeec80657df855a1230c9378f"} Feb 20 00:23:59 crc kubenswrapper[5119]: I0220 00:23:59.872343 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"2349fc8f-393f-4f48-91e7-3d876661c850","Type":"ContainerStarted","Data":"3c943ac159c0193c4b11dd1e59d3ae1540484ae1cebdff087a48d303dc702143"} Feb 20 00:23:59 crc kubenswrapper[5119]: I0220 00:23:59.900506 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" podStartSLOduration=1.422303018 podStartE2EDuration="9.900479582s" podCreationTimestamp="2026-02-20 00:23:50 +0000 UTC" firstStartedPulling="2026-02-20 00:23:50.914857012 +0000 UTC m=+812.893821294" lastFinishedPulling="2026-02-20 00:23:59.393033566 +0000 UTC m=+821.371997858" observedRunningTime="2026-02-20 00:23:59.891199422 +0000 UTC m=+821.870163714" watchObservedRunningTime="2026-02-20 00:23:59.900479582 +0000 UTC m=+821.879443904" Feb 20 00:24:00 crc kubenswrapper[5119]: I0220 00:24:00.149319 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29525784-nmmdc"] Feb 20 00:24:00 crc kubenswrapper[5119]: I0220 00:24:00.154244 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525784-nmmdc" Feb 20 00:24:00 crc kubenswrapper[5119]: I0220 00:24:00.158029 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-nmc85\"" Feb 20 00:24:00 crc kubenswrapper[5119]: I0220 00:24:00.158585 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 20 00:24:00 crc kubenswrapper[5119]: I0220 00:24:00.158879 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 20 00:24:00 crc kubenswrapper[5119]: I0220 00:24:00.164398 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525784-nmmdc"] Feb 20 00:24:00 crc kubenswrapper[5119]: I0220 00:24:00.325767 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m66p\" (UniqueName: \"kubernetes.io/projected/296ec321-10e7-4337-aa40-f6132860ee9e-kube-api-access-9m66p\") pod \"auto-csr-approver-29525784-nmmdc\" (UID: \"296ec321-10e7-4337-aa40-f6132860ee9e\") " pod="openshift-infra/auto-csr-approver-29525784-nmmdc" Feb 20 00:24:00 crc kubenswrapper[5119]: I0220 00:24:00.427045 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9m66p\" (UniqueName: \"kubernetes.io/projected/296ec321-10e7-4337-aa40-f6132860ee9e-kube-api-access-9m66p\") pod \"auto-csr-approver-29525784-nmmdc\" (UID: \"296ec321-10e7-4337-aa40-f6132860ee9e\") " pod="openshift-infra/auto-csr-approver-29525784-nmmdc" Feb 20 00:24:00 crc kubenswrapper[5119]: I0220 00:24:00.448325 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m66p\" (UniqueName: \"kubernetes.io/projected/296ec321-10e7-4337-aa40-f6132860ee9e-kube-api-access-9m66p\") pod \"auto-csr-approver-29525784-nmmdc\" (UID: \"296ec321-10e7-4337-aa40-f6132860ee9e\") " pod="openshift-infra/auto-csr-approver-29525784-nmmdc" Feb 20 00:24:00 crc kubenswrapper[5119]: I0220 00:24:00.481822 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525784-nmmdc" Feb 20 00:24:01 crc kubenswrapper[5119]: I0220 00:24:01.523860 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525784-nmmdc"] Feb 20 00:24:01 crc kubenswrapper[5119]: W0220 00:24:01.535104 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod296ec321_10e7_4337_aa40_f6132860ee9e.slice/crio-3bb8b1e3a86b9c23e97cda64fd6ab71067c76fd9dacda6972693b5c33046c6de WatchSource:0}: Error finding container 3bb8b1e3a86b9c23e97cda64fd6ab71067c76fd9dacda6972693b5c33046c6de: Status 404 returned error can't find the container with id 3bb8b1e3a86b9c23e97cda64fd6ab71067c76fd9dacda6972693b5c33046c6de Feb 20 00:24:01 crc kubenswrapper[5119]: I0220 00:24:01.888415 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525784-nmmdc" event={"ID":"296ec321-10e7-4337-aa40-f6132860ee9e","Type":"ContainerStarted","Data":"3bb8b1e3a86b9c23e97cda64fd6ab71067c76fd9dacda6972693b5c33046c6de"} Feb 20 00:24:02 crc kubenswrapper[5119]: I0220 00:24:02.169536 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw"] Feb 20 00:24:02 crc kubenswrapper[5119]: I0220 00:24:02.179998 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw" Feb 20 00:24:02 crc kubenswrapper[5119]: I0220 00:24:02.184154 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw"] Feb 20 00:24:02 crc kubenswrapper[5119]: I0220 00:24:02.267398 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm92z\" (UniqueName: \"kubernetes.io/projected/0c90d69f-473c-4d4e-827c-4538c713bb2e-kube-api-access-xm92z\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw\" (UID: \"0c90d69f-473c-4d4e-827c-4538c713bb2e\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw" Feb 20 00:24:02 crc kubenswrapper[5119]: I0220 00:24:02.267773 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0c90d69f-473c-4d4e-827c-4538c713bb2e-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw\" (UID: \"0c90d69f-473c-4d4e-827c-4538c713bb2e\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw" Feb 20 00:24:02 crc kubenswrapper[5119]: I0220 00:24:02.267916 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0c90d69f-473c-4d4e-827c-4538c713bb2e-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw\" (UID: \"0c90d69f-473c-4d4e-827c-4538c713bb2e\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw" Feb 20 00:24:02 crc kubenswrapper[5119]: I0220 00:24:02.369477 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0c90d69f-473c-4d4e-827c-4538c713bb2e-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw\" (UID: \"0c90d69f-473c-4d4e-827c-4538c713bb2e\") 
" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw" Feb 20 00:24:02 crc kubenswrapper[5119]: I0220 00:24:02.369660 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0c90d69f-473c-4d4e-827c-4538c713bb2e-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw\" (UID: \"0c90d69f-473c-4d4e-827c-4538c713bb2e\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw" Feb 20 00:24:02 crc kubenswrapper[5119]: I0220 00:24:02.369722 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xm92z\" (UniqueName: \"kubernetes.io/projected/0c90d69f-473c-4d4e-827c-4538c713bb2e-kube-api-access-xm92z\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw\" (UID: \"0c90d69f-473c-4d4e-827c-4538c713bb2e\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw" Feb 20 00:24:02 crc kubenswrapper[5119]: I0220 00:24:02.370125 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0c90d69f-473c-4d4e-827c-4538c713bb2e-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw\" (UID: \"0c90d69f-473c-4d4e-827c-4538c713bb2e\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw" Feb 20 00:24:02 crc kubenswrapper[5119]: I0220 00:24:02.370159 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0c90d69f-473c-4d4e-827c-4538c713bb2e-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw\" (UID: \"0c90d69f-473c-4d4e-827c-4538c713bb2e\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw" Feb 20 00:24:02 crc kubenswrapper[5119]: I0220 00:24:02.399878 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm92z\" (UniqueName: \"kubernetes.io/projected/0c90d69f-473c-4d4e-827c-4538c713bb2e-kube-api-access-xm92z\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw\" (UID: \"0c90d69f-473c-4d4e-827c-4538c713bb2e\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw" Feb 20 00:24:02 crc kubenswrapper[5119]: I0220 00:24:02.509302 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw" Feb 20 00:24:02 crc kubenswrapper[5119]: I0220 00:24:02.778391 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw"] Feb 20 00:24:02 crc kubenswrapper[5119]: W0220 00:24:02.780613 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c90d69f_473c_4d4e_827c_4538c713bb2e.slice/crio-adb8509acc5abda5e159180f559acf5a72a70bf17ae1a950d6e977ac780a05c9 WatchSource:0}: Error finding container adb8509acc5abda5e159180f559acf5a72a70bf17ae1a950d6e977ac780a05c9: Status 404 returned error can't find the container with id adb8509acc5abda5e159180f559acf5a72a70bf17ae1a950d6e977ac780a05c9 Feb 20 00:24:02 crc kubenswrapper[5119]: I0220 00:24:02.898251 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525784-nmmdc" event={"ID":"296ec321-10e7-4337-aa40-f6132860ee9e","Type":"ContainerStarted","Data":"f1509f8373951996f6a4e8a67ad71989aa3905c0adae8a4c142b69acf39b522f"} Feb 20 00:24:02 crc kubenswrapper[5119]: I0220 00:24:02.900376 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw" event={"ID":"0c90d69f-473c-4d4e-827c-4538c713bb2e","Type":"ContainerStarted","Data":"adb8509acc5abda5e159180f559acf5a72a70bf17ae1a950d6e977ac780a05c9"} Feb 20 00:24:02 crc kubenswrapper[5119]: I0220 00:24:02.912774 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29525784-nmmdc" podStartSLOduration=2.030688048 podStartE2EDuration="2.912757923s" podCreationTimestamp="2026-02-20 00:24:00 +0000 UTC" firstStartedPulling="2026-02-20 00:24:01.537852027 +0000 UTC m=+823.516816319" lastFinishedPulling="2026-02-20 00:24:02.419921852 +0000 UTC m=+824.398886194" observedRunningTime="2026-02-20 00:24:02.909400402 +0000 UTC m=+824.888364704" watchObservedRunningTime="2026-02-20 00:24:02.912757923 +0000 UTC m=+824.891722216" Feb 20 00:24:03 crc kubenswrapper[5119]: I0220 00:24:03.916233 5119 generic.go:358] "Generic (PLEG): container finished" podID="0c90d69f-473c-4d4e-827c-4538c713bb2e" containerID="79030ef9e0f68105bd5be6f914169280b54da06d23fc0fa5622dca0a5fd7abb2" exitCode=0 Feb 20 00:24:03 crc kubenswrapper[5119]: I0220 00:24:03.916490 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw" event={"ID":"0c90d69f-473c-4d4e-827c-4538c713bb2e","Type":"ContainerDied","Data":"79030ef9e0f68105bd5be6f914169280b54da06d23fc0fa5622dca0a5fd7abb2"} Feb 20 00:24:03 crc kubenswrapper[5119]: I0220 00:24:03.920482 5119 generic.go:358] "Generic (PLEG): container finished" podID="296ec321-10e7-4337-aa40-f6132860ee9e" containerID="f1509f8373951996f6a4e8a67ad71989aa3905c0adae8a4c142b69acf39b522f" exitCode=0 Feb 20 00:24:03 crc kubenswrapper[5119]: I0220 00:24:03.920640 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525784-nmmdc" event={"ID":"296ec321-10e7-4337-aa40-f6132860ee9e","Type":"ContainerDied","Data":"f1509f8373951996f6a4e8a67ad71989aa3905c0adae8a4c142b69acf39b522f"} Feb 20 00:24:04 crc kubenswrapper[5119]: I0220 00:24:04.936247 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw" event={"ID":"0c90d69f-473c-4d4e-827c-4538c713bb2e","Type":"ContainerStarted","Data":"0111f6e3523fe9fdaee3f428b6db91fa2c2b87da27b2ad8b9dcd65e6778f1d8b"} Feb 20 00:24:05 crc kubenswrapper[5119]: I0220 00:24:05.216152 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525784-nmmdc" Feb 20 00:24:05 crc kubenswrapper[5119]: I0220 00:24:05.318722 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m66p\" (UniqueName: \"kubernetes.io/projected/296ec321-10e7-4337-aa40-f6132860ee9e-kube-api-access-9m66p\") pod \"296ec321-10e7-4337-aa40-f6132860ee9e\" (UID: \"296ec321-10e7-4337-aa40-f6132860ee9e\") " Feb 20 00:24:05 crc kubenswrapper[5119]: I0220 00:24:05.338328 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/296ec321-10e7-4337-aa40-f6132860ee9e-kube-api-access-9m66p" (OuterVolumeSpecName: "kube-api-access-9m66p") pod "296ec321-10e7-4337-aa40-f6132860ee9e" (UID: "296ec321-10e7-4337-aa40-f6132860ee9e"). InnerVolumeSpecName "kube-api-access-9m66p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:24:05 crc kubenswrapper[5119]: I0220 00:24:05.421526 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9m66p\" (UniqueName: \"kubernetes.io/projected/296ec321-10e7-4337-aa40-f6132860ee9e-kube-api-access-9m66p\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:05 crc kubenswrapper[5119]: I0220 00:24:05.948388 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525784-nmmdc" event={"ID":"296ec321-10e7-4337-aa40-f6132860ee9e","Type":"ContainerDied","Data":"3bb8b1e3a86b9c23e97cda64fd6ab71067c76fd9dacda6972693b5c33046c6de"} Feb 20 00:24:05 crc kubenswrapper[5119]: I0220 00:24:05.948467 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bb8b1e3a86b9c23e97cda64fd6ab71067c76fd9dacda6972693b5c33046c6de" Feb 20 00:24:05 crc kubenswrapper[5119]: I0220 00:24:05.948623 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525784-nmmdc" Feb 20 00:24:05 crc kubenswrapper[5119]: I0220 00:24:05.951467 5119 generic.go:358] "Generic (PLEG): container finished" podID="0c90d69f-473c-4d4e-827c-4538c713bb2e" containerID="0111f6e3523fe9fdaee3f428b6db91fa2c2b87da27b2ad8b9dcd65e6778f1d8b" exitCode=0 Feb 20 00:24:05 crc kubenswrapper[5119]: I0220 00:24:05.951606 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw" event={"ID":"0c90d69f-473c-4d4e-827c-4538c713bb2e","Type":"ContainerDied","Data":"0111f6e3523fe9fdaee3f428b6db91fa2c2b87da27b2ad8b9dcd65e6778f1d8b"} Feb 20 00:24:06 crc kubenswrapper[5119]: I0220 00:24:06.013782 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29525778-pvrq6"] Feb 20 00:24:06 crc kubenswrapper[5119]: I0220 00:24:06.019398 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29525778-pvrq6"] Feb 20 00:24:06 crc kubenswrapper[5119]: I0220 00:24:06.868697 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e246885f-c45f-4a8a-879d-27add555cb0b" path="/var/lib/kubelet/pods/e246885f-c45f-4a8a-879d-27add555cb0b/volumes" Feb 20 00:24:06 crc kubenswrapper[5119]: I0220 00:24:06.964509 5119 generic.go:358] "Generic (PLEG): container finished" podID="0c90d69f-473c-4d4e-827c-4538c713bb2e" containerID="a8df3426216b21151dbc7fa091d8cd115059fb8f0c457fdd00e5b885615af971" exitCode=0 Feb 20 00:24:06 crc kubenswrapper[5119]: I0220 00:24:06.964628 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw" event={"ID":"0c90d69f-473c-4d4e-827c-4538c713bb2e","Type":"ContainerDied","Data":"a8df3426216b21151dbc7fa091d8cd115059fb8f0c457fdd00e5b885615af971"} Feb 20 00:24:08 crc kubenswrapper[5119]: I0220 00:24:08.260226 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw" Feb 20 00:24:08 crc kubenswrapper[5119]: I0220 00:24:08.365220 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xm92z\" (UniqueName: \"kubernetes.io/projected/0c90d69f-473c-4d4e-827c-4538c713bb2e-kube-api-access-xm92z\") pod \"0c90d69f-473c-4d4e-827c-4538c713bb2e\" (UID: \"0c90d69f-473c-4d4e-827c-4538c713bb2e\") " Feb 20 00:24:08 crc kubenswrapper[5119]: I0220 00:24:08.365438 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0c90d69f-473c-4d4e-827c-4538c713bb2e-bundle\") pod \"0c90d69f-473c-4d4e-827c-4538c713bb2e\" (UID: \"0c90d69f-473c-4d4e-827c-4538c713bb2e\") " Feb 20 00:24:08 crc kubenswrapper[5119]: I0220 00:24:08.365496 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0c90d69f-473c-4d4e-827c-4538c713bb2e-util\") pod \"0c90d69f-473c-4d4e-827c-4538c713bb2e\" (UID: \"0c90d69f-473c-4d4e-827c-4538c713bb2e\") " Feb 20 00:24:08 crc kubenswrapper[5119]: I0220 00:24:08.366322 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c90d69f-473c-4d4e-827c-4538c713bb2e-bundle" (OuterVolumeSpecName: "bundle") pod "0c90d69f-473c-4d4e-827c-4538c713bb2e" (UID: "0c90d69f-473c-4d4e-827c-4538c713bb2e"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:24:08 crc kubenswrapper[5119]: I0220 00:24:08.376531 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c90d69f-473c-4d4e-827c-4538c713bb2e-kube-api-access-xm92z" (OuterVolumeSpecName: "kube-api-access-xm92z") pod "0c90d69f-473c-4d4e-827c-4538c713bb2e" (UID: "0c90d69f-473c-4d4e-827c-4538c713bb2e"). InnerVolumeSpecName "kube-api-access-xm92z". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:24:08 crc kubenswrapper[5119]: I0220 00:24:08.467382 5119 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0c90d69f-473c-4d4e-827c-4538c713bb2e-bundle\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:08 crc kubenswrapper[5119]: I0220 00:24:08.467414 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xm92z\" (UniqueName: \"kubernetes.io/projected/0c90d69f-473c-4d4e-827c-4538c713bb2e-kube-api-access-xm92z\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:08 crc kubenswrapper[5119]: I0220 00:24:08.661529 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c90d69f-473c-4d4e-827c-4538c713bb2e-util" (OuterVolumeSpecName: "util") pod "0c90d69f-473c-4d4e-827c-4538c713bb2e" (UID: "0c90d69f-473c-4d4e-827c-4538c713bb2e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:24:08 crc kubenswrapper[5119]: I0220 00:24:08.670793 5119 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0c90d69f-473c-4d4e-827c-4538c713bb2e-util\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:08 crc kubenswrapper[5119]: I0220 00:24:08.985129 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw" Feb 20 00:24:08 crc kubenswrapper[5119]: I0220 00:24:08.985158 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661jklkw" event={"ID":"0c90d69f-473c-4d4e-827c-4538c713bb2e","Type":"ContainerDied","Data":"adb8509acc5abda5e159180f559acf5a72a70bf17ae1a950d6e977ac780a05c9"} Feb 20 00:24:08 crc kubenswrapper[5119]: I0220 00:24:08.985221 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adb8509acc5abda5e159180f559acf5a72a70bf17ae1a950d6e977ac780a05c9" Feb 20 00:24:12 crc kubenswrapper[5119]: I0220 00:24:12.232277 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-97b85656c-zjbls"] Feb 20 00:24:12 crc kubenswrapper[5119]: I0220 00:24:12.234005 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0c90d69f-473c-4d4e-827c-4538c713bb2e" containerName="util" Feb 20 00:24:12 crc kubenswrapper[5119]: I0220 00:24:12.234028 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c90d69f-473c-4d4e-827c-4538c713bb2e" containerName="util" Feb 20 00:24:12 crc kubenswrapper[5119]: I0220 00:24:12.234060 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="296ec321-10e7-4337-aa40-f6132860ee9e" containerName="oc" Feb 20 00:24:12 crc kubenswrapper[5119]: I0220 00:24:12.234072 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="296ec321-10e7-4337-aa40-f6132860ee9e" containerName="oc" Feb 20 00:24:12 crc kubenswrapper[5119]: I0220 00:24:12.234098 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0c90d69f-473c-4d4e-827c-4538c713bb2e" containerName="pull" Feb 20 00:24:12 crc kubenswrapper[5119]: I0220 00:24:12.234113 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c90d69f-473c-4d4e-827c-4538c713bb2e" containerName="pull" Feb 20 00:24:12 crc kubenswrapper[5119]: I0220 00:24:12.234131 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0c90d69f-473c-4d4e-827c-4538c713bb2e" containerName="extract" Feb 20 00:24:12 crc kubenswrapper[5119]: I0220 00:24:12.234144 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c90d69f-473c-4d4e-827c-4538c713bb2e" containerName="extract" Feb 20 00:24:12 crc kubenswrapper[5119]: I0220 00:24:12.234338 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="296ec321-10e7-4337-aa40-f6132860ee9e" containerName="oc" Feb 20 00:24:12 crc kubenswrapper[5119]: I0220 00:24:12.234362 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="0c90d69f-473c-4d4e-827c-4538c713bb2e" containerName="extract" Feb 20 00:24:12 crc kubenswrapper[5119]: I0220 00:24:12.250684 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-97b85656c-zjbls"] Feb 20 00:24:12 crc kubenswrapper[5119]: I0220 00:24:12.250871 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-97b85656c-zjbls" Feb 20 00:24:12 crc kubenswrapper[5119]: I0220 00:24:12.253601 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-dockercfg-qqjmc\"" Feb 20 00:24:12 crc kubenswrapper[5119]: I0220 00:24:12.334701 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjrxb\" (UniqueName: \"kubernetes.io/projected/eadb8db0-384b-4ee4-9857-ebd4c7e7600b-kube-api-access-bjrxb\") pod \"smart-gateway-operator-97b85656c-zjbls\" (UID: \"eadb8db0-384b-4ee4-9857-ebd4c7e7600b\") " pod="service-telemetry/smart-gateway-operator-97b85656c-zjbls" Feb 20 00:24:12 crc kubenswrapper[5119]: I0220 00:24:12.334796 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/eadb8db0-384b-4ee4-9857-ebd4c7e7600b-runner\") pod \"smart-gateway-operator-97b85656c-zjbls\" (UID: \"eadb8db0-384b-4ee4-9857-ebd4c7e7600b\") " pod="service-telemetry/smart-gateway-operator-97b85656c-zjbls" Feb 20 00:24:12 crc kubenswrapper[5119]: I0220 00:24:12.435762 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bjrxb\" (UniqueName: \"kubernetes.io/projected/eadb8db0-384b-4ee4-9857-ebd4c7e7600b-kube-api-access-bjrxb\") pod \"smart-gateway-operator-97b85656c-zjbls\" (UID: \"eadb8db0-384b-4ee4-9857-ebd4c7e7600b\") " pod="service-telemetry/smart-gateway-operator-97b85656c-zjbls" Feb 20 00:24:12 crc kubenswrapper[5119]: I0220 00:24:12.435818 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/eadb8db0-384b-4ee4-9857-ebd4c7e7600b-runner\") pod \"smart-gateway-operator-97b85656c-zjbls\" (UID: \"eadb8db0-384b-4ee4-9857-ebd4c7e7600b\") " pod="service-telemetry/smart-gateway-operator-97b85656c-zjbls" Feb 20 00:24:12 crc kubenswrapper[5119]: I0220 00:24:12.436210 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/eadb8db0-384b-4ee4-9857-ebd4c7e7600b-runner\") pod \"smart-gateway-operator-97b85656c-zjbls\" (UID: \"eadb8db0-384b-4ee4-9857-ebd4c7e7600b\") " pod="service-telemetry/smart-gateway-operator-97b85656c-zjbls" Feb 20 00:24:12 crc kubenswrapper[5119]: I0220 00:24:12.458619 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjrxb\" (UniqueName: \"kubernetes.io/projected/eadb8db0-384b-4ee4-9857-ebd4c7e7600b-kube-api-access-bjrxb\") pod \"smart-gateway-operator-97b85656c-zjbls\" (UID: \"eadb8db0-384b-4ee4-9857-ebd4c7e7600b\") " pod="service-telemetry/smart-gateway-operator-97b85656c-zjbls" Feb 20 00:24:12 crc kubenswrapper[5119]: I0220 00:24:12.583522 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-97b85656c-zjbls" Feb 20 00:24:13 crc kubenswrapper[5119]: I0220 00:24:13.015667 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-97b85656c-zjbls"] Feb 20 00:24:13 crc kubenswrapper[5119]: W0220 00:24:13.020855 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeadb8db0_384b_4ee4_9857_ebd4c7e7600b.slice/crio-13be18ac7a23ab4855cf71fe57f9c2a3d986e2526443969bf0ce97475fbcddd3 WatchSource:0}: Error finding container 13be18ac7a23ab4855cf71fe57f9c2a3d986e2526443969bf0ce97475fbcddd3: Status 404 returned error can't find the container with id 13be18ac7a23ab4855cf71fe57f9c2a3d986e2526443969bf0ce97475fbcddd3 Feb 20 00:24:14 crc kubenswrapper[5119]: I0220 00:24:14.021566 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-97b85656c-zjbls" event={"ID":"eadb8db0-384b-4ee4-9857-ebd4c7e7600b","Type":"ContainerStarted","Data":"13be18ac7a23ab4855cf71fe57f9c2a3d986e2526443969bf0ce97475fbcddd3"} Feb 20 00:24:19 crc kubenswrapper[5119]: I0220 00:24:19.617303 5119 scope.go:117] "RemoveContainer" containerID="c579104b50b04386a0930e094cc8fb30b667dfc6862c0c78aee0860c53152c04" Feb 20 00:24:29 crc kubenswrapper[5119]: I0220 00:24:29.164986 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-97b85656c-zjbls" event={"ID":"eadb8db0-384b-4ee4-9857-ebd4c7e7600b","Type":"ContainerStarted","Data":"aef38e7b646afa8152207e8661d0660ff528c2ae6b843c3dee035fe02c6a3c17"} Feb 20 00:24:29 crc kubenswrapper[5119]: I0220 00:24:29.200293 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-97b85656c-zjbls" podStartSLOduration=1.395783638 podStartE2EDuration="17.200265503s" podCreationTimestamp="2026-02-20 00:24:12 +0000 UTC" firstStartedPulling="2026-02-20 00:24:13.02194499 +0000 UTC m=+835.000909282" lastFinishedPulling="2026-02-20 00:24:28.826426845 +0000 UTC m=+850.805391147" observedRunningTime="2026-02-20 00:24:29.190426847 +0000 UTC m=+851.169391169" watchObservedRunningTime="2026-02-20 00:24:29.200265503 +0000 UTC m=+851.179229825" Feb 20 00:24:45 crc kubenswrapper[5119]: I0220 00:24:45.515120 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"] Feb 20 00:24:45 crc kubenswrapper[5119]: I0220 00:24:45.536623 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"] Feb 20 00:24:45 crc kubenswrapper[5119]: I0220 00:24:45.536765 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 20 00:24:45 crc kubenswrapper[5119]: I0220 00:24:45.539846 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-catalog-configmap-partition-1\"" Feb 20 00:24:45 crc kubenswrapper[5119]: I0220 00:24:45.648621 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/b6056c06-aaca-4273-9f3a-f729715e17b7-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"b6056c06-aaca-4273-9f3a-f729715e17b7\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 20 00:24:45 crc kubenswrapper[5119]: I0220 00:24:45.648754 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/b6056c06-aaca-4273-9f3a-f729715e17b7-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"b6056c06-aaca-4273-9f3a-f729715e17b7\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 20 00:24:45 crc kubenswrapper[5119]: I0220 00:24:45.648808 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbcgp\" (UniqueName: \"kubernetes.io/projected/b6056c06-aaca-4273-9f3a-f729715e17b7-kube-api-access-xbcgp\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"b6056c06-aaca-4273-9f3a-f729715e17b7\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 20 00:24:45 crc kubenswrapper[5119]: I0220 00:24:45.750372 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xbcgp\" (UniqueName: \"kubernetes.io/projected/b6056c06-aaca-4273-9f3a-f729715e17b7-kube-api-access-xbcgp\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"b6056c06-aaca-4273-9f3a-f729715e17b7\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 20 00:24:45 crc kubenswrapper[5119]: I0220 00:24:45.750522 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/b6056c06-aaca-4273-9f3a-f729715e17b7-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"b6056c06-aaca-4273-9f3a-f729715e17b7\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 20 00:24:45 crc kubenswrapper[5119]: I0220 00:24:45.750706 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/b6056c06-aaca-4273-9f3a-f729715e17b7-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"b6056c06-aaca-4273-9f3a-f729715e17b7\") " 
pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 20 00:24:45 crc kubenswrapper[5119]: I0220 00:24:45.751717 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/b6056c06-aaca-4273-9f3a-f729715e17b7-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"b6056c06-aaca-4273-9f3a-f729715e17b7\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 20 00:24:45 crc kubenswrapper[5119]: I0220 00:24:45.752753 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/b6056c06-aaca-4273-9f3a-f729715e17b7-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"b6056c06-aaca-4273-9f3a-f729715e17b7\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 20 00:24:45 crc kubenswrapper[5119]: I0220 00:24:45.780698 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbcgp\" (UniqueName: \"kubernetes.io/projected/b6056c06-aaca-4273-9f3a-f729715e17b7-kube-api-access-xbcgp\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"b6056c06-aaca-4273-9f3a-f729715e17b7\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 20 00:24:45 crc kubenswrapper[5119]: I0220 00:24:45.855056 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 20 00:24:46 crc kubenswrapper[5119]: I0220 00:24:46.148082 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"] Feb 20 00:24:46 crc kubenswrapper[5119]: W0220 00:24:46.158094 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6056c06_aaca_4273_9f3a_f729715e17b7.slice/crio-6aa633fc6d6ef18abb427218899440e5279a7e42f73db2fa8123ef1d6059cbf1 WatchSource:0}: Error finding container 6aa633fc6d6ef18abb427218899440e5279a7e42f73db2fa8123ef1d6059cbf1: Status 404 returned error can't find the container with id 6aa633fc6d6ef18abb427218899440e5279a7e42f73db2fa8123ef1d6059cbf1 Feb 20 00:24:46 crc kubenswrapper[5119]: I0220 00:24:46.307800 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"b6056c06-aaca-4273-9f3a-f729715e17b7","Type":"ContainerStarted","Data":"6aa633fc6d6ef18abb427218899440e5279a7e42f73db2fa8123ef1d6059cbf1"} Feb 20 00:24:47 crc kubenswrapper[5119]: I0220 00:24:47.319764 5119 generic.go:358] "Generic (PLEG): container finished" podID="b6056c06-aaca-4273-9f3a-f729715e17b7" containerID="d8a7e86bb0dc9b8c5f495ee8d8435d87c93eddd5ca10a55202232bf54ca3eb44" exitCode=0 Feb 20 00:24:47 crc kubenswrapper[5119]: I0220 00:24:47.319825 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"b6056c06-aaca-4273-9f3a-f729715e17b7","Type":"ContainerDied","Data":"d8a7e86bb0dc9b8c5f495ee8d8435d87c93eddd5ca10a55202232bf54ca3eb44"} Feb 20 00:24:48 crc kubenswrapper[5119]: I0220 00:24:48.327696 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"b6056c06-aaca-4273-9f3a-f729715e17b7","Type":"ContainerStarted","Data":"2ea342420570be9707cd1f8bd687f73eceaad0c6f11e30a771bf91e496c973d6"} Feb 20 00:24:48 crc kubenswrapper[5119]: I0220 00:24:48.346751 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" podStartSLOduration=2.869626978 podStartE2EDuration="3.346725835s" podCreationTimestamp="2026-02-20 00:24:45 +0000 UTC" firstStartedPulling="2026-02-20 00:24:47.320686962 +0000 UTC m=+869.299651264" lastFinishedPulling="2026-02-20 00:24:47.797785819 +0000 UTC m=+869.776750121" observedRunningTime="2026-02-20 00:24:48.346435088 +0000 UTC m=+870.325399390" watchObservedRunningTime="2026-02-20 00:24:48.346725835 +0000 UTC m=+870.325690117" Feb 20 00:24:49 crc kubenswrapper[5119]: I0220 00:24:49.951891 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572j9j9t"] Feb 20 00:24:50 crc kubenswrapper[5119]: I0220 00:24:50.317164 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572j9j9t"] Feb 20 00:24:50 crc kubenswrapper[5119]: I0220 00:24:50.317388 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572j9j9t" Feb 20 00:24:50 crc kubenswrapper[5119]: I0220 00:24:50.437826 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f6525752-2bea-4994-9012-eb09f30446d9-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572j9j9t\" (UID: \"f6525752-2bea-4994-9012-eb09f30446d9\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572j9j9t" Feb 20 00:24:50 crc kubenswrapper[5119]: I0220 00:24:50.437904 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67w4m\" (UniqueName: \"kubernetes.io/projected/f6525752-2bea-4994-9012-eb09f30446d9-kube-api-access-67w4m\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572j9j9t\" (UID: \"f6525752-2bea-4994-9012-eb09f30446d9\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572j9j9t" Feb 20 00:24:50 crc kubenswrapper[5119]: I0220 00:24:50.438323 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f6525752-2bea-4994-9012-eb09f30446d9-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572j9j9t\" (UID: \"f6525752-2bea-4994-9012-eb09f30446d9\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572j9j9t" Feb 20 00:24:50 crc kubenswrapper[5119]: I0220 00:24:50.540426 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f6525752-2bea-4994-9012-eb09f30446d9-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572j9j9t\" (UID: \"f6525752-2bea-4994-9012-eb09f30446d9\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572j9j9t" Feb 20 00:24:50 crc kubenswrapper[5119]: I0220 00:24:50.540529 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f6525752-2bea-4994-9012-eb09f30446d9-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572j9j9t\" (UID: \"f6525752-2bea-4994-9012-eb09f30446d9\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572j9j9t" Feb 20 00:24:50 crc kubenswrapper[5119]: I0220 00:24:50.540621 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-67w4m\" (UniqueName: \"kubernetes.io/projected/f6525752-2bea-4994-9012-eb09f30446d9-kube-api-access-67w4m\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572j9j9t\" (UID: \"f6525752-2bea-4994-9012-eb09f30446d9\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572j9j9t" Feb 20 00:24:50 crc kubenswrapper[5119]: I0220 00:24:50.541406 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f6525752-2bea-4994-9012-eb09f30446d9-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572j9j9t\" (UID: \"f6525752-2bea-4994-9012-eb09f30446d9\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572j9j9t" Feb 20 00:24:50 crc kubenswrapper[5119]: I0220 00:24:50.541932 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/f6525752-2bea-4994-9012-eb09f30446d9-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572j9j9t\" (UID: \"f6525752-2bea-4994-9012-eb09f30446d9\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572j9j9t" Feb 20 00:24:50 crc kubenswrapper[5119]: I0220 00:24:50.564923 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb"] Feb 20 00:24:50 crc kubenswrapper[5119]: I0220 00:24:50.622125 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-67w4m\" (UniqueName: \"kubernetes.io/projected/f6525752-2bea-4994-9012-eb09f30446d9-kube-api-access-67w4m\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572j9j9t\" (UID: \"f6525752-2bea-4994-9012-eb09f30446d9\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572j9j9t" Feb 20 00:24:50 crc kubenswrapper[5119]: I0220 00:24:50.653277 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572j9j9t" Feb 20 00:24:50 crc kubenswrapper[5119]: I0220 00:24:50.843883 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb"] Feb 20 00:24:50 crc kubenswrapper[5119]: I0220 00:24:50.844098 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb" Feb 20 00:24:50 crc kubenswrapper[5119]: I0220 00:24:50.847156 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Feb 20 00:24:50 crc kubenswrapper[5119]: I0220 00:24:50.871726 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572j9j9t"] Feb 20 00:24:50 crc kubenswrapper[5119]: W0220 00:24:50.874313 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6525752_2bea_4994_9012_eb09f30446d9.slice/crio-fa2a46753d47f71d8a9556fe6d18469be2e45f0d5cda5fcc084c8fe63b90f98f WatchSource:0}: Error finding container fa2a46753d47f71d8a9556fe6d18469be2e45f0d5cda5fcc084c8fe63b90f98f: Status 404 returned error can't find the container with id fa2a46753d47f71d8a9556fe6d18469be2e45f0d5cda5fcc084c8fe63b90f98f Feb 20 00:24:50 crc kubenswrapper[5119]: I0220 00:24:50.945947 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/03f522f3-35e9-4534-a728-bc225f746dda-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb\" (UID: \"03f522f3-35e9-4534-a728-bc225f746dda\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb" Feb 20 00:24:50 crc kubenswrapper[5119]: I0220 00:24:50.945997 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/03f522f3-35e9-4534-a728-bc225f746dda-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb\" (UID: \"03f522f3-35e9-4534-a728-bc225f746dda\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb" Feb 20 00:24:50 crc 
kubenswrapper[5119]: I0220 00:24:50.946062 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsc54\" (UniqueName: \"kubernetes.io/projected/03f522f3-35e9-4534-a728-bc225f746dda-kube-api-access-zsc54\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb\" (UID: \"03f522f3-35e9-4534-a728-bc225f746dda\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb" Feb 20 00:24:51 crc kubenswrapper[5119]: I0220 00:24:51.047276 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zsc54\" (UniqueName: \"kubernetes.io/projected/03f522f3-35e9-4534-a728-bc225f746dda-kube-api-access-zsc54\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb\" (UID: \"03f522f3-35e9-4534-a728-bc225f746dda\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb" Feb 20 00:24:51 crc kubenswrapper[5119]: I0220 00:24:51.047403 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/03f522f3-35e9-4534-a728-bc225f746dda-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb\" (UID: \"03f522f3-35e9-4534-a728-bc225f746dda\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb" Feb 20 00:24:51 crc kubenswrapper[5119]: I0220 00:24:51.047438 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/03f522f3-35e9-4534-a728-bc225f746dda-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb\" (UID: \"03f522f3-35e9-4534-a728-bc225f746dda\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb" Feb 20 00:24:51 crc kubenswrapper[5119]: I0220 00:24:51.047926 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/03f522f3-35e9-4534-a728-bc225f746dda-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb\" (UID: \"03f522f3-35e9-4534-a728-bc225f746dda\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb" Feb 20 00:24:51 crc kubenswrapper[5119]: I0220 00:24:51.048015 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/03f522f3-35e9-4534-a728-bc225f746dda-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb\" (UID: \"03f522f3-35e9-4534-a728-bc225f746dda\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb" Feb 20 00:24:51 crc kubenswrapper[5119]: I0220 00:24:51.081170 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsc54\" (UniqueName: \"kubernetes.io/projected/03f522f3-35e9-4534-a728-bc225f746dda-kube-api-access-zsc54\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb\" (UID: \"03f522f3-35e9-4534-a728-bc225f746dda\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb" Feb 20 00:24:51 crc kubenswrapper[5119]: I0220 00:24:51.173010 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb" Feb 20 00:24:51 crc kubenswrapper[5119]: I0220 00:24:51.361281 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572j9j9t" event={"ID":"f6525752-2bea-4994-9012-eb09f30446d9","Type":"ContainerStarted","Data":"fa2a46753d47f71d8a9556fe6d18469be2e45f0d5cda5fcc084c8fe63b90f98f"} Feb 20 00:24:51 crc kubenswrapper[5119]: I0220 00:24:51.460175 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb"] Feb 20 00:24:52 crc kubenswrapper[5119]: I0220 00:24:52.405804 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb" event={"ID":"03f522f3-35e9-4534-a728-bc225f746dda","Type":"ContainerStarted","Data":"f0a5908514dea9daccfbd5bc30e0ac5f306ee4246d73448e07c7c1547363f026"} Feb 20 00:24:53 crc kubenswrapper[5119]: I0220 00:24:53.430533 5119 generic.go:358] "Generic (PLEG): container finished" podID="f6525752-2bea-4994-9012-eb09f30446d9" containerID="ec20140bbee48ef148a177c5692ac500e03cd61266b13fbc30b5a40552bf2e62" exitCode=0 Feb 20 00:24:53 crc kubenswrapper[5119]: I0220 00:24:53.430731 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572j9j9t" event={"ID":"f6525752-2bea-4994-9012-eb09f30446d9","Type":"ContainerDied","Data":"ec20140bbee48ef148a177c5692ac500e03cd61266b13fbc30b5a40552bf2e62"} Feb 20 00:24:53 crc kubenswrapper[5119]: I0220 00:24:53.439104 5119 generic.go:358] "Generic (PLEG): container finished" podID="03f522f3-35e9-4534-a728-bc225f746dda" containerID="0c987999c79fa7d760a49578b81d531308d1960dcc0b3609b6633d6b0e9b86df" exitCode=0 Feb 20 00:24:53 crc kubenswrapper[5119]: I0220 00:24:53.439347 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb" event={"ID":"03f522f3-35e9-4534-a728-bc225f746dda","Type":"ContainerDied","Data":"0c987999c79fa7d760a49578b81d531308d1960dcc0b3609b6633d6b0e9b86df"} Feb 20 00:24:54 crc kubenswrapper[5119]: I0220 00:24:54.446479 5119 generic.go:358] "Generic (PLEG): container finished" podID="f6525752-2bea-4994-9012-eb09f30446d9" containerID="8da742a15633472ef06994ada59c53f48f451dfa0696fdd027daf7e8133dc066" exitCode=0 Feb 20 00:24:54 crc kubenswrapper[5119]: I0220 00:24:54.446627 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572j9j9t" event={"ID":"f6525752-2bea-4994-9012-eb09f30446d9","Type":"ContainerDied","Data":"8da742a15633472ef06994ada59c53f48f451dfa0696fdd027daf7e8133dc066"} Feb 20 00:24:55 crc kubenswrapper[5119]: I0220 00:24:55.457008 5119 generic.go:358] "Generic (PLEG): container finished" podID="f6525752-2bea-4994-9012-eb09f30446d9" containerID="fb306667a4d62c18bc034bec7e0da60c77480c7e55f63c97cdc033e842c216b0" exitCode=0 Feb 20 00:24:55 crc kubenswrapper[5119]: I0220 00:24:55.457249 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572j9j9t" event={"ID":"f6525752-2bea-4994-9012-eb09f30446d9","Type":"ContainerDied","Data":"fb306667a4d62c18bc034bec7e0da60c77480c7e55f63c97cdc033e842c216b0"} Feb 20 00:24:55 crc kubenswrapper[5119]: I0220 
00:24:55.459000 5119 generic.go:358] "Generic (PLEG): container finished" podID="03f522f3-35e9-4534-a728-bc225f746dda" containerID="bce3969bbdd44b2c5857e047cbe296579afd3049841c43abecfde1c88ea17514" exitCode=0 Feb 20 00:24:55 crc kubenswrapper[5119]: I0220 00:24:55.459209 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb" event={"ID":"03f522f3-35e9-4534-a728-bc225f746dda","Type":"ContainerDied","Data":"bce3969bbdd44b2c5857e047cbe296579afd3049841c43abecfde1c88ea17514"} Feb 20 00:24:56 crc kubenswrapper[5119]: I0220 00:24:56.470175 5119 generic.go:358] "Generic (PLEG): container finished" podID="03f522f3-35e9-4534-a728-bc225f746dda" containerID="3f88c8973d027e13c4545771153e74c287b146535c98a6c5805e39d97d6d7b78" exitCode=0 Feb 20 00:24:56 crc kubenswrapper[5119]: I0220 00:24:56.470262 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb" event={"ID":"03f522f3-35e9-4534-a728-bc225f746dda","Type":"ContainerDied","Data":"3f88c8973d027e13c4545771153e74c287b146535c98a6c5805e39d97d6d7b78"} Feb 20 00:24:56 crc kubenswrapper[5119]: I0220 00:24:56.758741 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572j9j9t" Feb 20 00:24:56 crc kubenswrapper[5119]: I0220 00:24:56.936946 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f6525752-2bea-4994-9012-eb09f30446d9-util\") pod \"f6525752-2bea-4994-9012-eb09f30446d9\" (UID: \"f6525752-2bea-4994-9012-eb09f30446d9\") " Feb 20 00:24:56 crc kubenswrapper[5119]: I0220 00:24:56.937008 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67w4m\" (UniqueName: \"kubernetes.io/projected/f6525752-2bea-4994-9012-eb09f30446d9-kube-api-access-67w4m\") pod \"f6525752-2bea-4994-9012-eb09f30446d9\" (UID: \"f6525752-2bea-4994-9012-eb09f30446d9\") " Feb 20 00:24:56 crc kubenswrapper[5119]: I0220 00:24:56.937038 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f6525752-2bea-4994-9012-eb09f30446d9-bundle\") pod \"f6525752-2bea-4994-9012-eb09f30446d9\" (UID: \"f6525752-2bea-4994-9012-eb09f30446d9\") " Feb 20 00:24:56 crc kubenswrapper[5119]: I0220 00:24:56.938475 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6525752-2bea-4994-9012-eb09f30446d9-bundle" (OuterVolumeSpecName: "bundle") pod "f6525752-2bea-4994-9012-eb09f30446d9" (UID: "f6525752-2bea-4994-9012-eb09f30446d9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:24:56 crc kubenswrapper[5119]: I0220 00:24:56.943165 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6525752-2bea-4994-9012-eb09f30446d9-kube-api-access-67w4m" (OuterVolumeSpecName: "kube-api-access-67w4m") pod "f6525752-2bea-4994-9012-eb09f30446d9" (UID: "f6525752-2bea-4994-9012-eb09f30446d9"). InnerVolumeSpecName "kube-api-access-67w4m". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:24:56 crc kubenswrapper[5119]: I0220 00:24:56.954418 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6525752-2bea-4994-9012-eb09f30446d9-util" (OuterVolumeSpecName: "util") pod "f6525752-2bea-4994-9012-eb09f30446d9" (UID: "f6525752-2bea-4994-9012-eb09f30446d9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:24:57 crc kubenswrapper[5119]: I0220 00:24:57.038477 5119 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f6525752-2bea-4994-9012-eb09f30446d9-util\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:57 crc kubenswrapper[5119]: I0220 00:24:57.038511 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-67w4m\" (UniqueName: \"kubernetes.io/projected/f6525752-2bea-4994-9012-eb09f30446d9-kube-api-access-67w4m\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:57 crc kubenswrapper[5119]: I0220 00:24:57.038524 5119 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f6525752-2bea-4994-9012-eb09f30446d9-bundle\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:57 crc kubenswrapper[5119]: I0220 00:24:57.482788 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572j9j9t" event={"ID":"f6525752-2bea-4994-9012-eb09f30446d9","Type":"ContainerDied","Data":"fa2a46753d47f71d8a9556fe6d18469be2e45f0d5cda5fcc084c8fe63b90f98f"} Feb 20 00:24:57 crc kubenswrapper[5119]: I0220 00:24:57.482823 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572j9j9t" Feb 20 00:24:57 crc kubenswrapper[5119]: I0220 00:24:57.482846 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa2a46753d47f71d8a9556fe6d18469be2e45f0d5cda5fcc084c8fe63b90f98f" Feb 20 00:24:57 crc kubenswrapper[5119]: I0220 00:24:57.771266 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb" Feb 20 00:24:57 crc kubenswrapper[5119]: I0220 00:24:57.953386 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsc54\" (UniqueName: \"kubernetes.io/projected/03f522f3-35e9-4534-a728-bc225f746dda-kube-api-access-zsc54\") pod \"03f522f3-35e9-4534-a728-bc225f746dda\" (UID: \"03f522f3-35e9-4534-a728-bc225f746dda\") " Feb 20 00:24:57 crc kubenswrapper[5119]: I0220 00:24:57.953634 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/03f522f3-35e9-4534-a728-bc225f746dda-util\") pod \"03f522f3-35e9-4534-a728-bc225f746dda\" (UID: \"03f522f3-35e9-4534-a728-bc225f746dda\") " Feb 20 00:24:57 crc kubenswrapper[5119]: I0220 00:24:57.953699 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/03f522f3-35e9-4534-a728-bc225f746dda-bundle\") pod \"03f522f3-35e9-4534-a728-bc225f746dda\" (UID: \"03f522f3-35e9-4534-a728-bc225f746dda\") " Feb 20 00:24:57 crc kubenswrapper[5119]: I0220 00:24:57.955053 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03f522f3-35e9-4534-a728-bc225f746dda-bundle" (OuterVolumeSpecName: "bundle") pod "03f522f3-35e9-4534-a728-bc225f746dda" (UID: "03f522f3-35e9-4534-a728-bc225f746dda"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:24:57 crc kubenswrapper[5119]: I0220 00:24:57.958471 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03f522f3-35e9-4534-a728-bc225f746dda-kube-api-access-zsc54" (OuterVolumeSpecName: "kube-api-access-zsc54") pod "03f522f3-35e9-4534-a728-bc225f746dda" (UID: "03f522f3-35e9-4534-a728-bc225f746dda"). InnerVolumeSpecName "kube-api-access-zsc54". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:24:58 crc kubenswrapper[5119]: I0220 00:24:58.055500 5119 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/03f522f3-35e9-4534-a728-bc225f746dda-bundle\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:58 crc kubenswrapper[5119]: I0220 00:24:58.055565 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsc54\" (UniqueName: \"kubernetes.io/projected/03f522f3-35e9-4534-a728-bc225f746dda-kube-api-access-zsc54\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:58 crc kubenswrapper[5119]: I0220 00:24:58.216191 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03f522f3-35e9-4534-a728-bc225f746dda-util" (OuterVolumeSpecName: "util") pod "03f522f3-35e9-4534-a728-bc225f746dda" (UID: "03f522f3-35e9-4534-a728-bc225f746dda"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:24:58 crc kubenswrapper[5119]: I0220 00:24:58.258809 5119 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/03f522f3-35e9-4534-a728-bc225f746dda-util\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:58 crc kubenswrapper[5119]: I0220 00:24:58.498155 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb" event={"ID":"03f522f3-35e9-4534-a728-bc225f746dda","Type":"ContainerDied","Data":"f0a5908514dea9daccfbd5bc30e0ac5f306ee4246d73448e07c7c1547363f026"} Feb 20 00:24:58 crc kubenswrapper[5119]: I0220 00:24:58.498514 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0a5908514dea9daccfbd5bc30e0ac5f306ee4246d73448e07c7c1547363f026" Feb 20 00:24:58 crc kubenswrapper[5119]: I0220 00:24:58.498180 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb" Feb 20 00:25:02 crc kubenswrapper[5119]: I0220 00:25:02.847885 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-794b5697c7-m88kv"] Feb 20 00:25:02 crc kubenswrapper[5119]: I0220 00:25:02.848933 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f6525752-2bea-4994-9012-eb09f30446d9" containerName="pull" Feb 20 00:25:02 crc kubenswrapper[5119]: I0220 00:25:02.848951 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6525752-2bea-4994-9012-eb09f30446d9" containerName="pull" Feb 20 00:25:02 crc kubenswrapper[5119]: I0220 00:25:02.848965 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="03f522f3-35e9-4534-a728-bc225f746dda" containerName="pull" Feb 20 00:25:02 crc kubenswrapper[5119]: I0220 00:25:02.848991 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f522f3-35e9-4534-a728-bc225f746dda" containerName="pull" Feb 20 00:25:02 crc kubenswrapper[5119]: I0220 00:25:02.849001 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="03f522f3-35e9-4534-a728-bc225f746dda" containerName="extract" Feb 20 00:25:02 crc kubenswrapper[5119]: I0220 00:25:02.849008 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f522f3-35e9-4534-a728-bc225f746dda" containerName="extract" Feb 20 00:25:02 crc kubenswrapper[5119]: I0220 00:25:02.849029 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="03f522f3-35e9-4534-a728-bc225f746dda" containerName="util" Feb 20 00:25:02 crc kubenswrapper[5119]: I0220 00:25:02.849036 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f522f3-35e9-4534-a728-bc225f746dda" containerName="util" Feb 20 00:25:02 crc kubenswrapper[5119]: I0220 00:25:02.849056 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f6525752-2bea-4994-9012-eb09f30446d9" containerName="util" Feb 20 00:25:02 crc kubenswrapper[5119]: I0220 00:25:02.849063 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6525752-2bea-4994-9012-eb09f30446d9" containerName="util" Feb 20 00:25:02 crc kubenswrapper[5119]: I0220 00:25:02.849073 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f6525752-2bea-4994-9012-eb09f30446d9" containerName="extract" Feb 20 00:25:02 crc kubenswrapper[5119]: I0220 00:25:02.849080 5119 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="f6525752-2bea-4994-9012-eb09f30446d9" containerName="extract" Feb 20 00:25:02 crc kubenswrapper[5119]: I0220 00:25:02.849188 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="03f522f3-35e9-4534-a728-bc225f746dda" containerName="extract" Feb 20 00:25:02 crc kubenswrapper[5119]: I0220 00:25:02.849199 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="f6525752-2bea-4994-9012-eb09f30446d9" containerName="extract" Feb 20 00:25:02 crc kubenswrapper[5119]: I0220 00:25:02.854375 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-794b5697c7-m88kv" Feb 20 00:25:02 crc kubenswrapper[5119]: I0220 00:25:02.856334 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-dockercfg-7xmj5\"" Feb 20 00:25:02 crc kubenswrapper[5119]: I0220 00:25:02.863953 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-794b5697c7-m88kv"] Feb 20 00:25:02 crc kubenswrapper[5119]: I0220 00:25:02.920798 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdhwf\" (UniqueName: \"kubernetes.io/projected/206e4ff6-6191-4c11-94cf-765aa3158c2d-kube-api-access-bdhwf\") pod \"service-telemetry-operator-794b5697c7-m88kv\" (UID: \"206e4ff6-6191-4c11-94cf-765aa3158c2d\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-m88kv" Feb 20 00:25:02 crc kubenswrapper[5119]: I0220 00:25:02.920896 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/206e4ff6-6191-4c11-94cf-765aa3158c2d-runner\") pod \"service-telemetry-operator-794b5697c7-m88kv\" (UID: \"206e4ff6-6191-4c11-94cf-765aa3158c2d\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-m88kv" Feb 20 00:25:03 crc kubenswrapper[5119]: I0220 00:25:03.021932 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bdhwf\" (UniqueName: \"kubernetes.io/projected/206e4ff6-6191-4c11-94cf-765aa3158c2d-kube-api-access-bdhwf\") pod \"service-telemetry-operator-794b5697c7-m88kv\" (UID: \"206e4ff6-6191-4c11-94cf-765aa3158c2d\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-m88kv" Feb 20 00:25:03 crc kubenswrapper[5119]: I0220 00:25:03.021992 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/206e4ff6-6191-4c11-94cf-765aa3158c2d-runner\") pod \"service-telemetry-operator-794b5697c7-m88kv\" (UID: \"206e4ff6-6191-4c11-94cf-765aa3158c2d\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-m88kv" Feb 20 00:25:03 crc kubenswrapper[5119]: I0220 00:25:03.022464 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/206e4ff6-6191-4c11-94cf-765aa3158c2d-runner\") pod \"service-telemetry-operator-794b5697c7-m88kv\" (UID: \"206e4ff6-6191-4c11-94cf-765aa3158c2d\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-m88kv" Feb 20 00:25:03 crc kubenswrapper[5119]: I0220 00:25:03.043157 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdhwf\" (UniqueName: \"kubernetes.io/projected/206e4ff6-6191-4c11-94cf-765aa3158c2d-kube-api-access-bdhwf\") pod \"service-telemetry-operator-794b5697c7-m88kv\" 
(UID: \"206e4ff6-6191-4c11-94cf-765aa3158c2d\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-m88kv" Feb 20 00:25:03 crc kubenswrapper[5119]: I0220 00:25:03.172075 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-794b5697c7-m88kv" Feb 20 00:25:03 crc kubenswrapper[5119]: I0220 00:25:03.414709 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-794b5697c7-m88kv"] Feb 20 00:25:03 crc kubenswrapper[5119]: I0220 00:25:03.558375 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-794b5697c7-m88kv" event={"ID":"206e4ff6-6191-4c11-94cf-765aa3158c2d","Type":"ContainerStarted","Data":"d326ecfe358a9b9fdb525c3fd2b467de78fe0a8ec1e9e159dc34f1fded8cb9ae"} Feb 20 00:25:04 crc kubenswrapper[5119]: I0220 00:25:04.200566 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-ld9zc"] Feb 20 00:25:04 crc kubenswrapper[5119]: I0220 00:25:04.210292 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-ld9zc" Feb 20 00:25:04 crc kubenswrapper[5119]: I0220 00:25:04.211678 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-ld9zc"] Feb 20 00:25:04 crc kubenswrapper[5119]: I0220 00:25:04.215347 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-2tzkw\"" Feb 20 00:25:04 crc kubenswrapper[5119]: I0220 00:25:04.239280 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkvnp\" (UniqueName: \"kubernetes.io/projected/6c24a2d9-40b5-422b-9fbd-9604adaa1a42-kube-api-access-hkvnp\") pod \"interconnect-operator-78b9bd8798-ld9zc\" (UID: \"6c24a2d9-40b5-422b-9fbd-9604adaa1a42\") " pod="service-telemetry/interconnect-operator-78b9bd8798-ld9zc" Feb 20 00:25:04 crc kubenswrapper[5119]: I0220 00:25:04.340984 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hkvnp\" (UniqueName: \"kubernetes.io/projected/6c24a2d9-40b5-422b-9fbd-9604adaa1a42-kube-api-access-hkvnp\") pod \"interconnect-operator-78b9bd8798-ld9zc\" (UID: \"6c24a2d9-40b5-422b-9fbd-9604adaa1a42\") " pod="service-telemetry/interconnect-operator-78b9bd8798-ld9zc" Feb 20 00:25:04 crc kubenswrapper[5119]: I0220 00:25:04.381278 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkvnp\" (UniqueName: \"kubernetes.io/projected/6c24a2d9-40b5-422b-9fbd-9604adaa1a42-kube-api-access-hkvnp\") pod \"interconnect-operator-78b9bd8798-ld9zc\" (UID: \"6c24a2d9-40b5-422b-9fbd-9604adaa1a42\") " pod="service-telemetry/interconnect-operator-78b9bd8798-ld9zc" Feb 20 00:25:04 crc kubenswrapper[5119]: I0220 00:25:04.524338 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-ld9zc" Feb 20 00:25:04 crc kubenswrapper[5119]: I0220 00:25:04.713688 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-ld9zc"] Feb 20 00:25:05 crc kubenswrapper[5119]: I0220 00:25:05.597130 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-ld9zc" event={"ID":"6c24a2d9-40b5-422b-9fbd-9604adaa1a42","Type":"ContainerStarted","Data":"42589eaaedcc8ece50427fdb5aecb5bf7561bc27f70112e089281253e1efd5f8"} Feb 20 00:25:10 crc kubenswrapper[5119]: I0220 00:25:10.643425 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-794b5697c7-m88kv" event={"ID":"206e4ff6-6191-4c11-94cf-765aa3158c2d","Type":"ContainerStarted","Data":"f770785371187aed27243bf77ae38e169aaf74b89130240940d88d4348caf1ef"} Feb 20 00:25:10 crc kubenswrapper[5119]: I0220 00:25:10.672386 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-794b5697c7-m88kv" podStartSLOduration=1.9974142480000001 podStartE2EDuration="8.672350678s" podCreationTimestamp="2026-02-20 00:25:02 +0000 UTC" firstStartedPulling="2026-02-20 00:25:03.420871856 +0000 UTC m=+885.399836168" lastFinishedPulling="2026-02-20 00:25:10.095808306 +0000 UTC m=+892.074772598" observedRunningTime="2026-02-20 00:25:10.664462025 +0000 UTC m=+892.643426357" watchObservedRunningTime="2026-02-20 00:25:10.672350678 +0000 UTC m=+892.651315010" Feb 20 00:25:16 crc kubenswrapper[5119]: I0220 00:25:16.685658 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-ld9zc" event={"ID":"6c24a2d9-40b5-422b-9fbd-9604adaa1a42","Type":"ContainerStarted","Data":"3963348269b455be7479da540a961690fd8c4f31bccf909cb2b1ba2c2b225280"} Feb 20 00:25:16 crc kubenswrapper[5119]: I0220 00:25:16.708519 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-ld9zc" podStartSLOduration=1.193649189 podStartE2EDuration="12.708494414s" podCreationTimestamp="2026-02-20 00:25:04 +0000 UTC" firstStartedPulling="2026-02-20 00:25:04.724359963 +0000 UTC m=+886.703324285" lastFinishedPulling="2026-02-20 00:25:16.239205188 +0000 UTC m=+898.218169510" observedRunningTime="2026-02-20 00:25:16.701920756 +0000 UTC m=+898.680885088" watchObservedRunningTime="2026-02-20 00:25:16.708494414 +0000 UTC m=+898.687458736" Feb 20 00:25:19 crc kubenswrapper[5119]: I0220 00:25:19.302356 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rlzxr_e24ea4b0-1a34-4fb3-b40c-684c03795e07/kube-multus/0.log" Feb 20 00:25:19 crc kubenswrapper[5119]: I0220 00:25:19.307584 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rlzxr_e24ea4b0-1a34-4fb3-b40c-684c03795e07/kube-multus/0.log" Feb 20 00:25:19 crc kubenswrapper[5119]: I0220 00:25:19.311508 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 20 00:25:19 crc kubenswrapper[5119]: I0220 00:25:19.315858 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 20 00:25:30 crc kubenswrapper[5119]: I0220 
00:25:30.789808 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-99glm"] Feb 20 00:25:30 crc kubenswrapper[5119]: I0220 00:25:30.827197 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-99glm"] Feb 20 00:25:30 crc kubenswrapper[5119]: I0220 00:25:30.827362 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" Feb 20 00:25:30 crc kubenswrapper[5119]: I0220 00:25:30.830209 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-credentials\"" Feb 20 00:25:30 crc kubenswrapper[5119]: I0220 00:25:30.831059 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-users\"" Feb 20 00:25:30 crc kubenswrapper[5119]: I0220 00:25:30.831520 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-interconnect-sasl-config\"" Feb 20 00:25:30 crc kubenswrapper[5119]: I0220 00:25:30.831893 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-ca\"" Feb 20 00:25:30 crc kubenswrapper[5119]: I0220 00:25:30.832038 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-dockercfg-kpcvr\"" Feb 20 00:25:30 crc kubenswrapper[5119]: I0220 00:25:30.832240 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-ca\"" Feb 20 00:25:30 crc kubenswrapper[5119]: I0220 00:25:30.834741 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-credentials\"" Feb 20 00:25:30 crc kubenswrapper[5119]: I0220 00:25:30.987881 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/61678311-ca0e-4066-a8ff-d8e438fc7108-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-99glm\" (UID: \"61678311-ca0e-4066-a8ff-d8e438fc7108\") " pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" Feb 20 00:25:30 crc kubenswrapper[5119]: I0220 00:25:30.988241 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/61678311-ca0e-4066-a8ff-d8e438fc7108-sasl-users\") pod \"default-interconnect-55bf8d5cb-99glm\" (UID: \"61678311-ca0e-4066-a8ff-d8e438fc7108\") " pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" Feb 20 00:25:30 crc kubenswrapper[5119]: I0220 00:25:30.988276 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/61678311-ca0e-4066-a8ff-d8e438fc7108-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-99glm\" (UID: \"61678311-ca0e-4066-a8ff-d8e438fc7108\") " pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" Feb 20 00:25:30 crc kubenswrapper[5119]: I0220 00:25:30.988330 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nrkp\" (UniqueName: 
\"kubernetes.io/projected/61678311-ca0e-4066-a8ff-d8e438fc7108-kube-api-access-6nrkp\") pod \"default-interconnect-55bf8d5cb-99glm\" (UID: \"61678311-ca0e-4066-a8ff-d8e438fc7108\") " pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" Feb 20 00:25:30 crc kubenswrapper[5119]: I0220 00:25:30.988581 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/61678311-ca0e-4066-a8ff-d8e438fc7108-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-99glm\" (UID: \"61678311-ca0e-4066-a8ff-d8e438fc7108\") " pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" Feb 20 00:25:30 crc kubenswrapper[5119]: I0220 00:25:30.988608 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/61678311-ca0e-4066-a8ff-d8e438fc7108-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-99glm\" (UID: \"61678311-ca0e-4066-a8ff-d8e438fc7108\") " pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" Feb 20 00:25:30 crc kubenswrapper[5119]: I0220 00:25:30.988631 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/61678311-ca0e-4066-a8ff-d8e438fc7108-sasl-config\") pod \"default-interconnect-55bf8d5cb-99glm\" (UID: \"61678311-ca0e-4066-a8ff-d8e438fc7108\") " pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" Feb 20 00:25:31 crc kubenswrapper[5119]: I0220 00:25:31.090492 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/61678311-ca0e-4066-a8ff-d8e438fc7108-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-99glm\" (UID: \"61678311-ca0e-4066-a8ff-d8e438fc7108\") " pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" Feb 20 00:25:31 crc kubenswrapper[5119]: I0220 00:25:31.090603 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/61678311-ca0e-4066-a8ff-d8e438fc7108-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-99glm\" (UID: \"61678311-ca0e-4066-a8ff-d8e438fc7108\") " pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" Feb 20 00:25:31 crc kubenswrapper[5119]: I0220 00:25:31.090660 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/61678311-ca0e-4066-a8ff-d8e438fc7108-sasl-config\") pod \"default-interconnect-55bf8d5cb-99glm\" (UID: \"61678311-ca0e-4066-a8ff-d8e438fc7108\") " pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" Feb 20 00:25:31 crc kubenswrapper[5119]: I0220 00:25:31.090722 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/61678311-ca0e-4066-a8ff-d8e438fc7108-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-99glm\" (UID: \"61678311-ca0e-4066-a8ff-d8e438fc7108\") " pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" Feb 20 00:25:31 crc kubenswrapper[5119]: I0220 00:25:31.090770 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/61678311-ca0e-4066-a8ff-d8e438fc7108-sasl-users\") pod \"default-interconnect-55bf8d5cb-99glm\" (UID: \"61678311-ca0e-4066-a8ff-d8e438fc7108\") " pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" Feb 20 00:25:31 crc kubenswrapper[5119]: I0220 00:25:31.090829 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/61678311-ca0e-4066-a8ff-d8e438fc7108-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-99glm\" (UID: \"61678311-ca0e-4066-a8ff-d8e438fc7108\") " pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" Feb 20 00:25:31 crc kubenswrapper[5119]: I0220 00:25:31.090889 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6nrkp\" (UniqueName: \"kubernetes.io/projected/61678311-ca0e-4066-a8ff-d8e438fc7108-kube-api-access-6nrkp\") pod \"default-interconnect-55bf8d5cb-99glm\" (UID: \"61678311-ca0e-4066-a8ff-d8e438fc7108\") " pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" Feb 20 00:25:31 crc kubenswrapper[5119]: I0220 00:25:31.092314 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/61678311-ca0e-4066-a8ff-d8e438fc7108-sasl-config\") pod \"default-interconnect-55bf8d5cb-99glm\" (UID: \"61678311-ca0e-4066-a8ff-d8e438fc7108\") " pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" Feb 20 00:25:31 crc kubenswrapper[5119]: I0220 00:25:31.097790 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/61678311-ca0e-4066-a8ff-d8e438fc7108-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-99glm\" (UID: \"61678311-ca0e-4066-a8ff-d8e438fc7108\") " pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" Feb 20 00:25:31 crc kubenswrapper[5119]: I0220 00:25:31.098180 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/61678311-ca0e-4066-a8ff-d8e438fc7108-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-99glm\" (UID: \"61678311-ca0e-4066-a8ff-d8e438fc7108\") " pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" Feb 20 00:25:31 crc kubenswrapper[5119]: I0220 00:25:31.098319 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/61678311-ca0e-4066-a8ff-d8e438fc7108-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-99glm\" (UID: \"61678311-ca0e-4066-a8ff-d8e438fc7108\") " pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" Feb 20 00:25:31 crc kubenswrapper[5119]: I0220 00:25:31.098756 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/61678311-ca0e-4066-a8ff-d8e438fc7108-sasl-users\") pod \"default-interconnect-55bf8d5cb-99glm\" (UID: \"61678311-ca0e-4066-a8ff-d8e438fc7108\") " pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" Feb 20 00:25:31 crc kubenswrapper[5119]: I0220 00:25:31.103528 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: 
\"kubernetes.io/secret/61678311-ca0e-4066-a8ff-d8e438fc7108-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-99glm\" (UID: \"61678311-ca0e-4066-a8ff-d8e438fc7108\") " pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" Feb 20 00:25:31 crc kubenswrapper[5119]: I0220 00:25:31.121773 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nrkp\" (UniqueName: \"kubernetes.io/projected/61678311-ca0e-4066-a8ff-d8e438fc7108-kube-api-access-6nrkp\") pod \"default-interconnect-55bf8d5cb-99glm\" (UID: \"61678311-ca0e-4066-a8ff-d8e438fc7108\") " pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" Feb 20 00:25:31 crc kubenswrapper[5119]: I0220 00:25:31.154040 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" Feb 20 00:25:31 crc kubenswrapper[5119]: I0220 00:25:31.380617 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-99glm"] Feb 20 00:25:31 crc kubenswrapper[5119]: I0220 00:25:31.802867 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" event={"ID":"61678311-ca0e-4066-a8ff-d8e438fc7108","Type":"ContainerStarted","Data":"b6aadc5455bf2f15ca8142a5fdcde1c4b8e73ad7d3c086215da20103777d1162"} Feb 20 00:25:38 crc kubenswrapper[5119]: I0220 00:25:38.870871 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" event={"ID":"61678311-ca0e-4066-a8ff-d8e438fc7108","Type":"ContainerStarted","Data":"19373faf1b5ed81c275df47dbb55ad0167a2ab409f5db85c3034a9faa7695cd3"} Feb 20 00:25:38 crc kubenswrapper[5119]: I0220 00:25:38.901075 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" podStartSLOduration=1.903657744 podStartE2EDuration="8.901057194s" podCreationTimestamp="2026-02-20 00:25:30 +0000 UTC" firstStartedPulling="2026-02-20 00:25:31.383749592 +0000 UTC m=+913.362713894" lastFinishedPulling="2026-02-20 00:25:38.381149022 +0000 UTC m=+920.360113344" observedRunningTime="2026-02-20 00:25:38.897974201 +0000 UTC m=+920.876938533" watchObservedRunningTime="2026-02-20 00:25:38.901057194 +0000 UTC m=+920.880021476" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.163773 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.174061 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.176566 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-tls-assets-0\"" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.177129 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-0\"" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.177429 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default\"" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.178135 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-session-secret\"" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.178162 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-web-config\"" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.178260 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-stf-dockercfg-jr8s6\"" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.178136 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-prometheus-proxy-tls\"" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.179265 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-2\"" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.179483 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-1\"" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.179700 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"serving-certs-ca-bundle\"" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.186672 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.246950 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0202d41c-dd80-4473-9175-855d12a13230-tls-assets\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.247037 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0202d41c-dd80-4473-9175-855d12a13230-web-config\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.247131 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a61a7f32-6aa5-4754-bee6-1a7d627fea0f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a61a7f32-6aa5-4754-bee6-1a7d627fea0f\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.247186 5119 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0202d41c-dd80-4473-9175-855d12a13230-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.247265 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0202d41c-dd80-4473-9175-855d12a13230-config-out\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.247410 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/0202d41c-dd80-4473-9175-855d12a13230-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.247503 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0202d41c-dd80-4473-9175-855d12a13230-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.247639 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0202d41c-dd80-4473-9175-855d12a13230-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.247710 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0202d41c-dd80-4473-9175-855d12a13230-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.247775 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg7zt\" (UniqueName: \"kubernetes.io/projected/0202d41c-dd80-4473-9175-855d12a13230-kube-api-access-pg7zt\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.247922 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0202d41c-dd80-4473-9175-855d12a13230-config\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.247963 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/0202d41c-dd80-4473-9175-855d12a13230-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.349775 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0202d41c-dd80-4473-9175-855d12a13230-tls-assets\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.349855 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0202d41c-dd80-4473-9175-855d12a13230-web-config\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.349938 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-a61a7f32-6aa5-4754-bee6-1a7d627fea0f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a61a7f32-6aa5-4754-bee6-1a7d627fea0f\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.349979 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0202d41c-dd80-4473-9175-855d12a13230-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.350039 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0202d41c-dd80-4473-9175-855d12a13230-config-out\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.350075 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/0202d41c-dd80-4473-9175-855d12a13230-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.350117 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0202d41c-dd80-4473-9175-855d12a13230-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.350150 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0202d41c-dd80-4473-9175-855d12a13230-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.350189 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0202d41c-dd80-4473-9175-855d12a13230-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.350244 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pg7zt\" (UniqueName: \"kubernetes.io/projected/0202d41c-dd80-4473-9175-855d12a13230-kube-api-access-pg7zt\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.350340 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0202d41c-dd80-4473-9175-855d12a13230-config\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.350375 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0202d41c-dd80-4473-9175-855d12a13230-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: E0220 00:25:41.351327 5119 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Feb 20 00:25:41 crc kubenswrapper[5119]: E0220 00:25:41.351503 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0202d41c-dd80-4473-9175-855d12a13230-secret-default-prometheus-proxy-tls podName:0202d41c-dd80-4473-9175-855d12a13230 nodeName:}" failed. No retries permitted until 2026-02-20 00:25:41.851460675 +0000 UTC m=+923.830425017 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/0202d41c-dd80-4473-9175-855d12a13230-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "0202d41c-dd80-4473-9175-855d12a13230") : secret "default-prometheus-proxy-tls" not found Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.351780 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0202d41c-dd80-4473-9175-855d12a13230-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.351870 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0202d41c-dd80-4473-9175-855d12a13230-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.352310 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0202d41c-dd80-4473-9175-855d12a13230-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.353771 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0202d41c-dd80-4473-9175-855d12a13230-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.355035 5119 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
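
[Editor's note, not part of the captured log] The "secret \"default-prometheus-proxy-tls\" not found" failures around this point are retried by the kubelet with a doubling delay: the nestedpendingoperations entries report durationBeforeRetry of 500ms and then 1s for prometheus-default-0, and 500ms, 1s, then 2s for alertmanager-default-0 further down, until the operator creates the secret and MountVolume.SetUp succeeds. The following is a minimal, self-contained Go sketch of that doubling-backoff pattern only; it is not kubelet source, and the function name, the retry cap, and the loop bound are illustrative assumptions.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func main() {
        // First durationBeforeRetry reported in the entries above.
        delay := 500 * time.Millisecond
        // Assumed cap, for illustration only; the real limit is not visible in this log.
        maxDelay := 2 * time.Minute

        // Stand-in for the failing mount; in this sketch the secret never appears.
        mountSecretVolume := func() error {
            return errors.New(`secret "default-prometheus-proxy-tls" not found`)
        }

        for attempt := 1; attempt <= 4; attempt++ {
            err := mountSecretVolume()
            if err == nil {
                fmt.Println("MountVolume.SetUp succeeded")
                return
            }
            fmt.Printf("attempt %d failed: %v; no retries permitted for %s\n", attempt, err, delay)
            time.Sleep(delay)
            delay *= 2 // 500ms -> 1s -> 2s, matching the durationBeforeRetry values in the log
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

In the captured log the loop ends differently: once the secret exists (operation_generator.go reports "MountVolume.SetUp succeeded" at 00:25:42.894229 for prometheus-default-0 and 00:25:58.727134 for alertmanager-default-0), the pod sandbox is created and the ContainerStarted events follow.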
Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.355081 5119 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-a61a7f32-6aa5-4754-bee6-1a7d627fea0f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a61a7f32-6aa5-4754-bee6-1a7d627fea0f\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d7e6897ba8cb41118da086897f3352aba821ae010d8caab79339a124fd67395c/globalmount\"" pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.360054 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0202d41c-dd80-4473-9175-855d12a13230-web-config\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.360696 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/0202d41c-dd80-4473-9175-855d12a13230-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.361238 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0202d41c-dd80-4473-9175-855d12a13230-config\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.364564 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0202d41c-dd80-4473-9175-855d12a13230-tls-assets\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.383190 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0202d41c-dd80-4473-9175-855d12a13230-config-out\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.392328 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-a61a7f32-6aa5-4754-bee6-1a7d627fea0f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a61a7f32-6aa5-4754-bee6-1a7d627fea0f\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.395034 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg7zt\" (UniqueName: \"kubernetes.io/projected/0202d41c-dd80-4473-9175-855d12a13230-kube-api-access-pg7zt\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: I0220 00:25:41.857359 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0202d41c-dd80-4473-9175-855d12a13230-secret-default-prometheus-proxy-tls\") pod 
\"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:41 crc kubenswrapper[5119]: E0220 00:25:41.857588 5119 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Feb 20 00:25:41 crc kubenswrapper[5119]: E0220 00:25:41.858088 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0202d41c-dd80-4473-9175-855d12a13230-secret-default-prometheus-proxy-tls podName:0202d41c-dd80-4473-9175-855d12a13230 nodeName:}" failed. No retries permitted until 2026-02-20 00:25:42.858058628 +0000 UTC m=+924.837022960 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/0202d41c-dd80-4473-9175-855d12a13230-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "0202d41c-dd80-4473-9175-855d12a13230") : secret "default-prometheus-proxy-tls" not found Feb 20 00:25:42 crc kubenswrapper[5119]: I0220 00:25:42.160345 5119 patch_prober.go:28] interesting pod/machine-config-daemon-l7jjp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:25:42 crc kubenswrapper[5119]: I0220 00:25:42.160473 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" podUID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:25:42 crc kubenswrapper[5119]: I0220 00:25:42.886432 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0202d41c-dd80-4473-9175-855d12a13230-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:42 crc kubenswrapper[5119]: I0220 00:25:42.894229 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0202d41c-dd80-4473-9175-855d12a13230-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"0202d41c-dd80-4473-9175-855d12a13230\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:25:43 crc kubenswrapper[5119]: I0220 00:25:43.006508 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Feb 20 00:25:43 crc kubenswrapper[5119]: I0220 00:25:43.500748 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 20 00:25:43 crc kubenswrapper[5119]: I0220 00:25:43.508972 5119 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 20 00:25:43 crc kubenswrapper[5119]: I0220 00:25:43.910436 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"0202d41c-dd80-4473-9175-855d12a13230","Type":"ContainerStarted","Data":"e186a50b8b407c04bd91f0bc3faa39b6f35bc1014949731518d642f388974e16"} Feb 20 00:25:47 crc kubenswrapper[5119]: I0220 00:25:47.942072 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"0202d41c-dd80-4473-9175-855d12a13230","Type":"ContainerStarted","Data":"25e98c486aa849809b55ae725e79bb7d8d156ec1711d5680f99051373918439e"} Feb 20 00:25:51 crc kubenswrapper[5119]: I0220 00:25:51.327382 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-jc2zw"] Feb 20 00:25:51 crc kubenswrapper[5119]: I0220 00:25:51.345691 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-jc2zw"] Feb 20 00:25:51 crc kubenswrapper[5119]: I0220 00:25:51.345818 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-jc2zw" Feb 20 00:25:51 crc kubenswrapper[5119]: I0220 00:25:51.514713 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pndtl\" (UniqueName: \"kubernetes.io/projected/4e23a4f9-0ec1-4b2f-b886-0bced8fe7442-kube-api-access-pndtl\") pod \"default-snmp-webhook-6774d8dfbc-jc2zw\" (UID: \"4e23a4f9-0ec1-4b2f-b886-0bced8fe7442\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-jc2zw" Feb 20 00:25:51 crc kubenswrapper[5119]: I0220 00:25:51.616780 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pndtl\" (UniqueName: \"kubernetes.io/projected/4e23a4f9-0ec1-4b2f-b886-0bced8fe7442-kube-api-access-pndtl\") pod \"default-snmp-webhook-6774d8dfbc-jc2zw\" (UID: \"4e23a4f9-0ec1-4b2f-b886-0bced8fe7442\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-jc2zw" Feb 20 00:25:51 crc kubenswrapper[5119]: I0220 00:25:51.649016 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pndtl\" (UniqueName: \"kubernetes.io/projected/4e23a4f9-0ec1-4b2f-b886-0bced8fe7442-kube-api-access-pndtl\") pod \"default-snmp-webhook-6774d8dfbc-jc2zw\" (UID: \"4e23a4f9-0ec1-4b2f-b886-0bced8fe7442\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-jc2zw" Feb 20 00:25:51 crc kubenswrapper[5119]: I0220 00:25:51.674011 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-jc2zw" Feb 20 00:25:52 crc kubenswrapper[5119]: I0220 00:25:52.130902 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-jc2zw"] Feb 20 00:25:52 crc kubenswrapper[5119]: I0220 00:25:52.987432 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-jc2zw" event={"ID":"4e23a4f9-0ec1-4b2f-b886-0bced8fe7442","Type":"ContainerStarted","Data":"40ab256b2d5ab1a4d2d6fdabbac584703a688158b1ecbb5f9b712a44c915d2e5"} Feb 20 00:25:54 crc kubenswrapper[5119]: I0220 00:25:54.893047 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 20 00:25:54 crc kubenswrapper[5119]: I0220 00:25:54.967400 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 20 00:25:54 crc kubenswrapper[5119]: I0220 00:25:54.967592 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:54 crc kubenswrapper[5119]: I0220 00:25:54.969914 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-generated\"" Feb 20 00:25:54 crc kubenswrapper[5119]: I0220 00:25:54.970813 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-alertmanager-proxy-tls\"" Feb 20 00:25:54 crc kubenswrapper[5119]: I0220 00:25:54.970845 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-cluster-tls-config\"" Feb 20 00:25:54 crc kubenswrapper[5119]: I0220 00:25:54.971608 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-stf-dockercfg-8626k\"" Feb 20 00:25:54 crc kubenswrapper[5119]: I0220 00:25:54.972226 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-web-config\"" Feb 20 00:25:54 crc kubenswrapper[5119]: I0220 00:25:54.975827 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-tls-assets-0\"" Feb 20 00:25:55 crc kubenswrapper[5119]: I0220 00:25:55.068969 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsrn4\" (UniqueName: \"kubernetes.io/projected/96709d8a-7441-425a-b115-9d8d56a0c603-kube-api-access-hsrn4\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:55 crc kubenswrapper[5119]: I0220 00:25:55.069022 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/96709d8a-7441-425a-b115-9d8d56a0c603-tls-assets\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:55 crc kubenswrapper[5119]: I0220 00:25:55.069121 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/96709d8a-7441-425a-b115-9d8d56a0c603-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") " 
pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:55 crc kubenswrapper[5119]: I0220 00:25:55.069155 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/96709d8a-7441-425a-b115-9d8d56a0c603-web-config\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:55 crc kubenswrapper[5119]: I0220 00:25:55.069188 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1c6c077d-0a41-40e4-a61f-5df16d5be7c3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1c6c077d-0a41-40e4-a61f-5df16d5be7c3\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:55 crc kubenswrapper[5119]: I0220 00:25:55.069221 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/96709d8a-7441-425a-b115-9d8d56a0c603-config-out\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:55 crc kubenswrapper[5119]: I0220 00:25:55.069263 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/96709d8a-7441-425a-b115-9d8d56a0c603-config-volume\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:55 crc kubenswrapper[5119]: I0220 00:25:55.069283 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/96709d8a-7441-425a-b115-9d8d56a0c603-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:55 crc kubenswrapper[5119]: I0220 00:25:55.069315 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/96709d8a-7441-425a-b115-9d8d56a0c603-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:55 crc kubenswrapper[5119]: I0220 00:25:55.170753 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/96709d8a-7441-425a-b115-9d8d56a0c603-config-volume\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:55 crc kubenswrapper[5119]: I0220 00:25:55.170811 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/96709d8a-7441-425a-b115-9d8d56a0c603-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:55 crc kubenswrapper[5119]: I0220 00:25:55.170843 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/96709d8a-7441-425a-b115-9d8d56a0c603-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:55 crc kubenswrapper[5119]: I0220 00:25:55.170921 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hsrn4\" (UniqueName: \"kubernetes.io/projected/96709d8a-7441-425a-b115-9d8d56a0c603-kube-api-access-hsrn4\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:55 crc kubenswrapper[5119]: I0220 00:25:55.170947 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/96709d8a-7441-425a-b115-9d8d56a0c603-tls-assets\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:55 crc kubenswrapper[5119]: I0220 00:25:55.171003 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/96709d8a-7441-425a-b115-9d8d56a0c603-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:55 crc kubenswrapper[5119]: E0220 00:25:55.171018 5119 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 20 00:25:55 crc kubenswrapper[5119]: E0220 00:25:55.171099 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96709d8a-7441-425a-b115-9d8d56a0c603-secret-default-alertmanager-proxy-tls podName:96709d8a-7441-425a-b115-9d8d56a0c603 nodeName:}" failed. No retries permitted until 2026-02-20 00:25:55.671080094 +0000 UTC m=+937.650044386 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/96709d8a-7441-425a-b115-9d8d56a0c603-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "96709d8a-7441-425a-b115-9d8d56a0c603") : secret "default-alertmanager-proxy-tls" not found Feb 20 00:25:55 crc kubenswrapper[5119]: I0220 00:25:55.171173 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/96709d8a-7441-425a-b115-9d8d56a0c603-web-config\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:55 crc kubenswrapper[5119]: I0220 00:25:55.171245 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-1c6c077d-0a41-40e4-a61f-5df16d5be7c3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1c6c077d-0a41-40e4-a61f-5df16d5be7c3\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:55 crc kubenswrapper[5119]: I0220 00:25:55.171308 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/96709d8a-7441-425a-b115-9d8d56a0c603-config-out\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:55 crc kubenswrapper[5119]: I0220 00:25:55.175641 5119 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 20 00:25:55 crc kubenswrapper[5119]: I0220 00:25:55.175682 5119 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-1c6c077d-0a41-40e4-a61f-5df16d5be7c3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1c6c077d-0a41-40e4-a61f-5df16d5be7c3\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3c51afc61aba2be27a7f715a2162a4a4fb3cf4cc0db285fc2d3596c42ccc8f35/globalmount\"" pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:55 crc kubenswrapper[5119]: I0220 00:25:55.177789 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/96709d8a-7441-425a-b115-9d8d56a0c603-web-config\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:55 crc kubenswrapper[5119]: I0220 00:25:55.178534 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/96709d8a-7441-425a-b115-9d8d56a0c603-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:55 crc kubenswrapper[5119]: I0220 00:25:55.179212 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/96709d8a-7441-425a-b115-9d8d56a0c603-tls-assets\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:55 crc kubenswrapper[5119]: I0220 00:25:55.180487 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/secret/96709d8a-7441-425a-b115-9d8d56a0c603-config-volume\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:55 crc kubenswrapper[5119]: I0220 00:25:55.181128 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/96709d8a-7441-425a-b115-9d8d56a0c603-config-out\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:55 crc kubenswrapper[5119]: I0220 00:25:55.181717 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/96709d8a-7441-425a-b115-9d8d56a0c603-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:55 crc kubenswrapper[5119]: I0220 00:25:55.191374 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsrn4\" (UniqueName: \"kubernetes.io/projected/96709d8a-7441-425a-b115-9d8d56a0c603-kube-api-access-hsrn4\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:55 crc kubenswrapper[5119]: I0220 00:25:55.302903 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-1c6c077d-0a41-40e4-a61f-5df16d5be7c3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1c6c077d-0a41-40e4-a61f-5df16d5be7c3\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:55 crc kubenswrapper[5119]: I0220 00:25:55.678238 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/96709d8a-7441-425a-b115-9d8d56a0c603-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:55 crc kubenswrapper[5119]: E0220 00:25:55.678452 5119 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 20 00:25:55 crc kubenswrapper[5119]: E0220 00:25:55.678563 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96709d8a-7441-425a-b115-9d8d56a0c603-secret-default-alertmanager-proxy-tls podName:96709d8a-7441-425a-b115-9d8d56a0c603 nodeName:}" failed. No retries permitted until 2026-02-20 00:25:56.67852789 +0000 UTC m=+938.657492182 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/96709d8a-7441-425a-b115-9d8d56a0c603-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "96709d8a-7441-425a-b115-9d8d56a0c603") : secret "default-alertmanager-proxy-tls" not found Feb 20 00:25:56 crc kubenswrapper[5119]: I0220 00:25:56.012155 5119 generic.go:358] "Generic (PLEG): container finished" podID="0202d41c-dd80-4473-9175-855d12a13230" containerID="25e98c486aa849809b55ae725e79bb7d8d156ec1711d5680f99051373918439e" exitCode=0 Feb 20 00:25:56 crc kubenswrapper[5119]: I0220 00:25:56.012338 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"0202d41c-dd80-4473-9175-855d12a13230","Type":"ContainerDied","Data":"25e98c486aa849809b55ae725e79bb7d8d156ec1711d5680f99051373918439e"} Feb 20 00:25:56 crc kubenswrapper[5119]: I0220 00:25:56.692349 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/96709d8a-7441-425a-b115-9d8d56a0c603-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:56 crc kubenswrapper[5119]: E0220 00:25:56.692607 5119 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 20 00:25:56 crc kubenswrapper[5119]: E0220 00:25:56.692810 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96709d8a-7441-425a-b115-9d8d56a0c603-secret-default-alertmanager-proxy-tls podName:96709d8a-7441-425a-b115-9d8d56a0c603 nodeName:}" failed. No retries permitted until 2026-02-20 00:25:58.692785924 +0000 UTC m=+940.671750226 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/96709d8a-7441-425a-b115-9d8d56a0c603-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "96709d8a-7441-425a-b115-9d8d56a0c603") : secret "default-alertmanager-proxy-tls" not found Feb 20 00:25:58 crc kubenswrapper[5119]: I0220 00:25:58.721143 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/96709d8a-7441-425a-b115-9d8d56a0c603-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:58 crc kubenswrapper[5119]: I0220 00:25:58.727134 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/96709d8a-7441-425a-b115-9d8d56a0c603-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"96709d8a-7441-425a-b115-9d8d56a0c603\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:58 crc kubenswrapper[5119]: I0220 00:25:58.892288 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Feb 20 00:25:59 crc kubenswrapper[5119]: I0220 00:25:59.827620 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 20 00:25:59 crc kubenswrapper[5119]: W0220 00:25:59.839257 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96709d8a_7441_425a_b115_9d8d56a0c603.slice/crio-7cf9f5f615ca77e444789094860448d18831695d973ea09f222fbb580b646be3 WatchSource:0}: Error finding container 7cf9f5f615ca77e444789094860448d18831695d973ea09f222fbb580b646be3: Status 404 returned error can't find the container with id 7cf9f5f615ca77e444789094860448d18831695d973ea09f222fbb580b646be3 Feb 20 00:26:00 crc kubenswrapper[5119]: I0220 00:26:00.041689 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-jc2zw" event={"ID":"4e23a4f9-0ec1-4b2f-b886-0bced8fe7442","Type":"ContainerStarted","Data":"e7e024339479b55cbea2589ccf7a7f76488eb89eb72c58fb5f00e13495439b70"} Feb 20 00:26:00 crc kubenswrapper[5119]: I0220 00:26:00.045241 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"96709d8a-7441-425a-b115-9d8d56a0c603","Type":"ContainerStarted","Data":"7cf9f5f615ca77e444789094860448d18831695d973ea09f222fbb580b646be3"} Feb 20 00:26:00 crc kubenswrapper[5119]: I0220 00:26:00.061529 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-jc2zw" podStartSLOduration=1.484117147 podStartE2EDuration="9.061492127s" podCreationTimestamp="2026-02-20 00:25:51 +0000 UTC" firstStartedPulling="2026-02-20 00:25:52.131616321 +0000 UTC m=+934.110580653" lastFinishedPulling="2026-02-20 00:25:59.708991341 +0000 UTC m=+941.687955633" observedRunningTime="2026-02-20 00:26:00.05342457 +0000 UTC m=+942.032388862" watchObservedRunningTime="2026-02-20 00:26:00.061492127 +0000 UTC m=+942.040456409" Feb 20 00:26:00 crc kubenswrapper[5119]: I0220 00:26:00.133601 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29525786-wsdlp"] Feb 20 00:26:00 crc kubenswrapper[5119]: I0220 00:26:00.140124 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525786-wsdlp" Feb 20 00:26:00 crc kubenswrapper[5119]: I0220 00:26:00.143660 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525786-wsdlp"] Feb 20 00:26:00 crc kubenswrapper[5119]: I0220 00:26:00.145065 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 20 00:26:00 crc kubenswrapper[5119]: I0220 00:26:00.145236 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 20 00:26:00 crc kubenswrapper[5119]: I0220 00:26:00.145534 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-nmc85\"" Feb 20 00:26:00 crc kubenswrapper[5119]: I0220 00:26:00.240090 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znfxd\" (UniqueName: \"kubernetes.io/projected/215f2411-f8a7-4a86-a987-c1555486f58b-kube-api-access-znfxd\") pod \"auto-csr-approver-29525786-wsdlp\" (UID: \"215f2411-f8a7-4a86-a987-c1555486f58b\") " pod="openshift-infra/auto-csr-approver-29525786-wsdlp" Feb 20 00:26:00 crc kubenswrapper[5119]: I0220 00:26:00.341942 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-znfxd\" (UniqueName: \"kubernetes.io/projected/215f2411-f8a7-4a86-a987-c1555486f58b-kube-api-access-znfxd\") pod \"auto-csr-approver-29525786-wsdlp\" (UID: \"215f2411-f8a7-4a86-a987-c1555486f58b\") " pod="openshift-infra/auto-csr-approver-29525786-wsdlp" Feb 20 00:26:00 crc kubenswrapper[5119]: I0220 00:26:00.369004 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-znfxd\" (UniqueName: \"kubernetes.io/projected/215f2411-f8a7-4a86-a987-c1555486f58b-kube-api-access-znfxd\") pod \"auto-csr-approver-29525786-wsdlp\" (UID: \"215f2411-f8a7-4a86-a987-c1555486f58b\") " pod="openshift-infra/auto-csr-approver-29525786-wsdlp" Feb 20 00:26:00 crc kubenswrapper[5119]: I0220 00:26:00.466711 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525786-wsdlp" Feb 20 00:26:03 crc kubenswrapper[5119]: I0220 00:26:03.017649 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525786-wsdlp"] Feb 20 00:26:03 crc kubenswrapper[5119]: W0220 00:26:03.017962 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod215f2411_f8a7_4a86_a987_c1555486f58b.slice/crio-3ffb61a99bf2975aad10d6c2bb6e796e8a122b1baff04c278b92c57954e064fd WatchSource:0}: Error finding container 3ffb61a99bf2975aad10d6c2bb6e796e8a122b1baff04c278b92c57954e064fd: Status 404 returned error can't find the container with id 3ffb61a99bf2975aad10d6c2bb6e796e8a122b1baff04c278b92c57954e064fd Feb 20 00:26:03 crc kubenswrapper[5119]: I0220 00:26:03.068090 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525786-wsdlp" event={"ID":"215f2411-f8a7-4a86-a987-c1555486f58b","Type":"ContainerStarted","Data":"3ffb61a99bf2975aad10d6c2bb6e796e8a122b1baff04c278b92c57954e064fd"} Feb 20 00:26:03 crc kubenswrapper[5119]: I0220 00:26:03.069467 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"0202d41c-dd80-4473-9175-855d12a13230","Type":"ContainerStarted","Data":"b5e536fb8063980f8d0a03fb468d4de34e7fe783ebf2cab7047744b7860e12ef"} Feb 20 00:26:05 crc kubenswrapper[5119]: I0220 00:26:05.084085 5119 generic.go:358] "Generic (PLEG): container finished" podID="215f2411-f8a7-4a86-a987-c1555486f58b" containerID="51f5ce14bd2f457fab5a67d80ef3dcb29306674b34d3db7c456128020a2b03e5" exitCode=0 Feb 20 00:26:05 crc kubenswrapper[5119]: I0220 00:26:05.085110 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525786-wsdlp" event={"ID":"215f2411-f8a7-4a86-a987-c1555486f58b","Type":"ContainerDied","Data":"51f5ce14bd2f457fab5a67d80ef3dcb29306674b34d3db7c456128020a2b03e5"} Feb 20 00:26:05 crc kubenswrapper[5119]: I0220 00:26:05.091075 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"96709d8a-7441-425a-b115-9d8d56a0c603","Type":"ContainerStarted","Data":"6bee6919f71f9a6574e57a73eb79e35c1bc6161c707db17d978944f813d87f2f"} Feb 20 00:26:06 crc kubenswrapper[5119]: I0220 00:26:06.100240 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"0202d41c-dd80-4473-9175-855d12a13230","Type":"ContainerStarted","Data":"0da1f686ab9c93905d391490f9c635ed226cf8629d2a2388bd067caeadaf7602"} Feb 20 00:26:06 crc kubenswrapper[5119]: I0220 00:26:06.396431 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525786-wsdlp" Feb 20 00:26:06 crc kubenswrapper[5119]: I0220 00:26:06.435395 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znfxd\" (UniqueName: \"kubernetes.io/projected/215f2411-f8a7-4a86-a987-c1555486f58b-kube-api-access-znfxd\") pod \"215f2411-f8a7-4a86-a987-c1555486f58b\" (UID: \"215f2411-f8a7-4a86-a987-c1555486f58b\") " Feb 20 00:26:06 crc kubenswrapper[5119]: I0220 00:26:06.450730 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/215f2411-f8a7-4a86-a987-c1555486f58b-kube-api-access-znfxd" (OuterVolumeSpecName: "kube-api-access-znfxd") pod "215f2411-f8a7-4a86-a987-c1555486f58b" (UID: "215f2411-f8a7-4a86-a987-c1555486f58b"). InnerVolumeSpecName "kube-api-access-znfxd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:26:06 crc kubenswrapper[5119]: I0220 00:26:06.537526 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-znfxd\" (UniqueName: \"kubernetes.io/projected/215f2411-f8a7-4a86-a987-c1555486f58b-kube-api-access-znfxd\") on node \"crc\" DevicePath \"\"" Feb 20 00:26:07 crc kubenswrapper[5119]: I0220 00:26:07.115032 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525786-wsdlp" event={"ID":"215f2411-f8a7-4a86-a987-c1555486f58b","Type":"ContainerDied","Data":"3ffb61a99bf2975aad10d6c2bb6e796e8a122b1baff04c278b92c57954e064fd"} Feb 20 00:26:07 crc kubenswrapper[5119]: I0220 00:26:07.115199 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525786-wsdlp" Feb 20 00:26:07 crc kubenswrapper[5119]: I0220 00:26:07.115209 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ffb61a99bf2975aad10d6c2bb6e796e8a122b1baff04c278b92c57954e064fd" Feb 20 00:26:07 crc kubenswrapper[5119]: I0220 00:26:07.472309 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29525780-c78v7"] Feb 20 00:26:07 crc kubenswrapper[5119]: I0220 00:26:07.476528 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29525780-c78v7"] Feb 20 00:26:08 crc kubenswrapper[5119]: I0220 00:26:08.335874 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg"] Feb 20 00:26:08 crc kubenswrapper[5119]: I0220 00:26:08.336756 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="215f2411-f8a7-4a86-a987-c1555486f58b" containerName="oc" Feb 20 00:26:08 crc kubenswrapper[5119]: I0220 00:26:08.336768 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="215f2411-f8a7-4a86-a987-c1555486f58b" containerName="oc" Feb 20 00:26:08 crc kubenswrapper[5119]: I0220 00:26:08.336889 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="215f2411-f8a7-4a86-a987-c1555486f58b" containerName="oc" Feb 20 00:26:08 crc kubenswrapper[5119]: I0220 00:26:08.848370 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg"] Feb 20 00:26:08 crc kubenswrapper[5119]: I0220 00:26:08.848558 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" Feb 20 00:26:08 crc kubenswrapper[5119]: I0220 00:26:08.850743 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-proxy-tls\"" Feb 20 00:26:08 crc kubenswrapper[5119]: I0220 00:26:08.850760 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-dockercfg-r4vft\"" Feb 20 00:26:08 crc kubenswrapper[5119]: I0220 00:26:08.850877 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-session-secret\"" Feb 20 00:26:08 crc kubenswrapper[5119]: I0220 00:26:08.851081 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-sg-core-configmap\"" Feb 20 00:26:08 crc kubenswrapper[5119]: I0220 00:26:08.871840 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prs96\" (UniqueName: \"kubernetes.io/projected/80a12472-2909-402e-8aed-167b8ddc8adf-kube-api-access-prs96\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-gv8rg\" (UID: \"80a12472-2909-402e-8aed-167b8ddc8adf\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" Feb 20 00:26:08 crc kubenswrapper[5119]: I0220 00:26:08.871916 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/80a12472-2909-402e-8aed-167b8ddc8adf-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-gv8rg\" (UID: \"80a12472-2909-402e-8aed-167b8ddc8adf\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" Feb 20 00:26:08 crc kubenswrapper[5119]: I0220 00:26:08.871998 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/80a12472-2909-402e-8aed-167b8ddc8adf-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-gv8rg\" (UID: \"80a12472-2909-402e-8aed-167b8ddc8adf\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" Feb 20 00:26:08 crc kubenswrapper[5119]: I0220 00:26:08.873652 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/80a12472-2909-402e-8aed-167b8ddc8adf-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-gv8rg\" (UID: \"80a12472-2909-402e-8aed-167b8ddc8adf\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" Feb 20 00:26:08 crc kubenswrapper[5119]: I0220 00:26:08.873701 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/80a12472-2909-402e-8aed-167b8ddc8adf-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-gv8rg\" (UID: \"80a12472-2909-402e-8aed-167b8ddc8adf\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" Feb 20 00:26:08 crc kubenswrapper[5119]: I0220 00:26:08.889216 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bb94ffb-0736-4317-979f-e6be8f6cf7d9" 
path="/var/lib/kubelet/pods/5bb94ffb-0736-4317-979f-e6be8f6cf7d9/volumes" Feb 20 00:26:08 crc kubenswrapper[5119]: I0220 00:26:08.975158 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/80a12472-2909-402e-8aed-167b8ddc8adf-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-gv8rg\" (UID: \"80a12472-2909-402e-8aed-167b8ddc8adf\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" Feb 20 00:26:08 crc kubenswrapper[5119]: I0220 00:26:08.975215 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/80a12472-2909-402e-8aed-167b8ddc8adf-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-gv8rg\" (UID: \"80a12472-2909-402e-8aed-167b8ddc8adf\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" Feb 20 00:26:08 crc kubenswrapper[5119]: I0220 00:26:08.975306 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-prs96\" (UniqueName: \"kubernetes.io/projected/80a12472-2909-402e-8aed-167b8ddc8adf-kube-api-access-prs96\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-gv8rg\" (UID: \"80a12472-2909-402e-8aed-167b8ddc8adf\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" Feb 20 00:26:08 crc kubenswrapper[5119]: I0220 00:26:08.975358 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/80a12472-2909-402e-8aed-167b8ddc8adf-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-gv8rg\" (UID: \"80a12472-2909-402e-8aed-167b8ddc8adf\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" Feb 20 00:26:08 crc kubenswrapper[5119]: I0220 00:26:08.975434 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/80a12472-2909-402e-8aed-167b8ddc8adf-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-gv8rg\" (UID: \"80a12472-2909-402e-8aed-167b8ddc8adf\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" Feb 20 00:26:08 crc kubenswrapper[5119]: I0220 00:26:08.976034 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/80a12472-2909-402e-8aed-167b8ddc8adf-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-gv8rg\" (UID: \"80a12472-2909-402e-8aed-167b8ddc8adf\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" Feb 20 00:26:08 crc kubenswrapper[5119]: E0220 00:26:08.976484 5119 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Feb 20 00:26:08 crc kubenswrapper[5119]: E0220 00:26:08.976609 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80a12472-2909-402e-8aed-167b8ddc8adf-default-cloud1-coll-meter-proxy-tls podName:80a12472-2909-402e-8aed-167b8ddc8adf nodeName:}" failed. No retries permitted until 2026-02-20 00:26:09.476583473 +0000 UTC m=+951.455547845 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/80a12472-2909-402e-8aed-167b8ddc8adf-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" (UID: "80a12472-2909-402e-8aed-167b8ddc8adf") : secret "default-cloud1-coll-meter-proxy-tls" not found Feb 20 00:26:08 crc kubenswrapper[5119]: I0220 00:26:08.976754 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/80a12472-2909-402e-8aed-167b8ddc8adf-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-gv8rg\" (UID: \"80a12472-2909-402e-8aed-167b8ddc8adf\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" Feb 20 00:26:08 crc kubenswrapper[5119]: I0220 00:26:08.983474 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/80a12472-2909-402e-8aed-167b8ddc8adf-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-gv8rg\" (UID: \"80a12472-2909-402e-8aed-167b8ddc8adf\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" Feb 20 00:26:08 crc kubenswrapper[5119]: I0220 00:26:08.993220 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-prs96\" (UniqueName: \"kubernetes.io/projected/80a12472-2909-402e-8aed-167b8ddc8adf-kube-api-access-prs96\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-gv8rg\" (UID: \"80a12472-2909-402e-8aed-167b8ddc8adf\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" Feb 20 00:26:09 crc kubenswrapper[5119]: I0220 00:26:09.481873 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/80a12472-2909-402e-8aed-167b8ddc8adf-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-gv8rg\" (UID: \"80a12472-2909-402e-8aed-167b8ddc8adf\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" Feb 20 00:26:09 crc kubenswrapper[5119]: E0220 00:26:09.482011 5119 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Feb 20 00:26:09 crc kubenswrapper[5119]: E0220 00:26:09.482791 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80a12472-2909-402e-8aed-167b8ddc8adf-default-cloud1-coll-meter-proxy-tls podName:80a12472-2909-402e-8aed-167b8ddc8adf nodeName:}" failed. No retries permitted until 2026-02-20 00:26:10.482770355 +0000 UTC m=+952.461734647 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/80a12472-2909-402e-8aed-167b8ddc8adf-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" (UID: "80a12472-2909-402e-8aed-167b8ddc8adf") : secret "default-cloud1-coll-meter-proxy-tls" not found Feb 20 00:26:10 crc kubenswrapper[5119]: I0220 00:26:10.145169 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5"] Feb 20 00:26:10 crc kubenswrapper[5119]: I0220 00:26:10.156403 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" Feb 20 00:26:10 crc kubenswrapper[5119]: I0220 00:26:10.159510 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-proxy-tls\"" Feb 20 00:26:10 crc kubenswrapper[5119]: I0220 00:26:10.159562 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-sg-core-configmap\"" Feb 20 00:26:10 crc kubenswrapper[5119]: I0220 00:26:10.166565 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5"] Feb 20 00:26:10 crc kubenswrapper[5119]: I0220 00:26:10.197857 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b2966a3b-fe46-4c89-a42a-4efe55c81cf4-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5\" (UID: \"b2966a3b-fe46-4c89-a42a-4efe55c81cf4\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" Feb 20 00:26:10 crc kubenswrapper[5119]: I0220 00:26:10.197926 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/b2966a3b-fe46-4c89-a42a-4efe55c81cf4-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5\" (UID: \"b2966a3b-fe46-4c89-a42a-4efe55c81cf4\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" Feb 20 00:26:10 crc kubenswrapper[5119]: I0220 00:26:10.197956 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b2966a3b-fe46-4c89-a42a-4efe55c81cf4-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5\" (UID: \"b2966a3b-fe46-4c89-a42a-4efe55c81cf4\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" Feb 20 00:26:10 crc kubenswrapper[5119]: I0220 00:26:10.197983 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b725p\" (UniqueName: \"kubernetes.io/projected/b2966a3b-fe46-4c89-a42a-4efe55c81cf4-kube-api-access-b725p\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5\" (UID: \"b2966a3b-fe46-4c89-a42a-4efe55c81cf4\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" Feb 20 00:26:10 crc kubenswrapper[5119]: I0220 00:26:10.198020 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/b2966a3b-fe46-4c89-a42a-4efe55c81cf4-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5\" (UID: \"b2966a3b-fe46-4c89-a42a-4efe55c81cf4\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" Feb 20 00:26:10 crc kubenswrapper[5119]: I0220 00:26:10.299736 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b2966a3b-fe46-4c89-a42a-4efe55c81cf4-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5\" (UID: \"b2966a3b-fe46-4c89-a42a-4efe55c81cf4\") " 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" Feb 20 00:26:10 crc kubenswrapper[5119]: I0220 00:26:10.299840 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b725p\" (UniqueName: \"kubernetes.io/projected/b2966a3b-fe46-4c89-a42a-4efe55c81cf4-kube-api-access-b725p\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5\" (UID: \"b2966a3b-fe46-4c89-a42a-4efe55c81cf4\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" Feb 20 00:26:10 crc kubenswrapper[5119]: I0220 00:26:10.299912 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/b2966a3b-fe46-4c89-a42a-4efe55c81cf4-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5\" (UID: \"b2966a3b-fe46-4c89-a42a-4efe55c81cf4\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" Feb 20 00:26:10 crc kubenswrapper[5119]: I0220 00:26:10.300027 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b2966a3b-fe46-4c89-a42a-4efe55c81cf4-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5\" (UID: \"b2966a3b-fe46-4c89-a42a-4efe55c81cf4\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" Feb 20 00:26:10 crc kubenswrapper[5119]: I0220 00:26:10.300090 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/b2966a3b-fe46-4c89-a42a-4efe55c81cf4-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5\" (UID: \"b2966a3b-fe46-4c89-a42a-4efe55c81cf4\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" Feb 20 00:26:10 crc kubenswrapper[5119]: E0220 00:26:10.300331 5119 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 20 00:26:10 crc kubenswrapper[5119]: E0220 00:26:10.300429 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2966a3b-fe46-4c89-a42a-4efe55c81cf4-default-cloud1-ceil-meter-proxy-tls podName:b2966a3b-fe46-4c89-a42a-4efe55c81cf4 nodeName:}" failed. No retries permitted until 2026-02-20 00:26:10.800410588 +0000 UTC m=+952.779374880 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/b2966a3b-fe46-4c89-a42a-4efe55c81cf4-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" (UID: "b2966a3b-fe46-4c89-a42a-4efe55c81cf4") : secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 20 00:26:10 crc kubenswrapper[5119]: I0220 00:26:10.300883 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b2966a3b-fe46-4c89-a42a-4efe55c81cf4-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5\" (UID: \"b2966a3b-fe46-4c89-a42a-4efe55c81cf4\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" Feb 20 00:26:10 crc kubenswrapper[5119]: I0220 00:26:10.301438 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/b2966a3b-fe46-4c89-a42a-4efe55c81cf4-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5\" (UID: \"b2966a3b-fe46-4c89-a42a-4efe55c81cf4\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" Feb 20 00:26:10 crc kubenswrapper[5119]: I0220 00:26:10.305752 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/b2966a3b-fe46-4c89-a42a-4efe55c81cf4-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5\" (UID: \"b2966a3b-fe46-4c89-a42a-4efe55c81cf4\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" Feb 20 00:26:10 crc kubenswrapper[5119]: I0220 00:26:10.320104 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b725p\" (UniqueName: \"kubernetes.io/projected/b2966a3b-fe46-4c89-a42a-4efe55c81cf4-kube-api-access-b725p\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5\" (UID: \"b2966a3b-fe46-4c89-a42a-4efe55c81cf4\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" Feb 20 00:26:10 crc kubenswrapper[5119]: I0220 00:26:10.504085 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/80a12472-2909-402e-8aed-167b8ddc8adf-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-gv8rg\" (UID: \"80a12472-2909-402e-8aed-167b8ddc8adf\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" Feb 20 00:26:10 crc kubenswrapper[5119]: I0220 00:26:10.507462 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/80a12472-2909-402e-8aed-167b8ddc8adf-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-gv8rg\" (UID: \"80a12472-2909-402e-8aed-167b8ddc8adf\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" Feb 20 00:26:10 crc kubenswrapper[5119]: I0220 00:26:10.686873 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" Feb 20 00:26:10 crc kubenswrapper[5119]: I0220 00:26:10.807707 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b2966a3b-fe46-4c89-a42a-4efe55c81cf4-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5\" (UID: \"b2966a3b-fe46-4c89-a42a-4efe55c81cf4\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" Feb 20 00:26:10 crc kubenswrapper[5119]: E0220 00:26:10.807873 5119 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 20 00:26:10 crc kubenswrapper[5119]: E0220 00:26:10.808225 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2966a3b-fe46-4c89-a42a-4efe55c81cf4-default-cloud1-ceil-meter-proxy-tls podName:b2966a3b-fe46-4c89-a42a-4efe55c81cf4 nodeName:}" failed. No retries permitted until 2026-02-20 00:26:11.808196422 +0000 UTC m=+953.787160794 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/b2966a3b-fe46-4c89-a42a-4efe55c81cf4-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" (UID: "b2966a3b-fe46-4c89-a42a-4efe55c81cf4") : secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 20 00:26:11 crc kubenswrapper[5119]: I0220 00:26:11.821136 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b2966a3b-fe46-4c89-a42a-4efe55c81cf4-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5\" (UID: \"b2966a3b-fe46-4c89-a42a-4efe55c81cf4\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" Feb 20 00:26:11 crc kubenswrapper[5119]: I0220 00:26:11.826086 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b2966a3b-fe46-4c89-a42a-4efe55c81cf4-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5\" (UID: \"b2966a3b-fe46-4c89-a42a-4efe55c81cf4\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" Feb 20 00:26:12 crc kubenswrapper[5119]: I0220 00:26:12.025382 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" Feb 20 00:26:12 crc kubenswrapper[5119]: I0220 00:26:12.163602 5119 patch_prober.go:28] interesting pod/machine-config-daemon-l7jjp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:26:12 crc kubenswrapper[5119]: I0220 00:26:12.163666 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" podUID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:26:12 crc kubenswrapper[5119]: I0220 00:26:12.173453 5119 generic.go:358] "Generic (PLEG): container finished" podID="96709d8a-7441-425a-b115-9d8d56a0c603" containerID="6bee6919f71f9a6574e57a73eb79e35c1bc6161c707db17d978944f813d87f2f" exitCode=0 Feb 20 00:26:12 crc kubenswrapper[5119]: I0220 00:26:12.173484 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"96709d8a-7441-425a-b115-9d8d56a0c603","Type":"ContainerDied","Data":"6bee6919f71f9a6574e57a73eb79e35c1bc6161c707db17d978944f813d87f2f"} Feb 20 00:26:12 crc kubenswrapper[5119]: W0220 00:26:12.335857 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80a12472_2909_402e_8aed_167b8ddc8adf.slice/crio-51616be9fe201c56688cdd274677747bd2c71a79c628d3c0e5d3062d80458577 WatchSource:0}: Error finding container 51616be9fe201c56688cdd274677747bd2c71a79c628d3c0e5d3062d80458577: Status 404 returned error can't find the container with id 51616be9fe201c56688cdd274677747bd2c71a79c628d3c0e5d3062d80458577 Feb 20 00:26:12 crc kubenswrapper[5119]: I0220 00:26:12.335888 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg"] Feb 20 00:26:12 crc kubenswrapper[5119]: I0220 00:26:12.506580 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5"] Feb 20 00:26:13 crc kubenswrapper[5119]: I0220 00:26:13.189947 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" event={"ID":"b2966a3b-fe46-4c89-a42a-4efe55c81cf4","Type":"ContainerStarted","Data":"deb648bc022ca99cdfcdc66501ba9402bbfff6b80f65534874437158da2965b6"} Feb 20 00:26:13 crc kubenswrapper[5119]: I0220 00:26:13.203596 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" event={"ID":"80a12472-2909-402e-8aed-167b8ddc8adf","Type":"ContainerStarted","Data":"4fb87533d94e2abe3f7f1149896c12dd031098d58888b7d033df9ec5bce31c53"} Feb 20 00:26:13 crc kubenswrapper[5119]: I0220 00:26:13.203650 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" event={"ID":"80a12472-2909-402e-8aed-167b8ddc8adf","Type":"ContainerStarted","Data":"51616be9fe201c56688cdd274677747bd2c71a79c628d3c0e5d3062d80458577"} Feb 20 00:26:13 crc kubenswrapper[5119]: I0220 00:26:13.210777 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/prometheus-default-0" event={"ID":"0202d41c-dd80-4473-9175-855d12a13230","Type":"ContainerStarted","Data":"5155700bca94024683c3141692fe27ddae105d1f5a28399dc240f00903450723"} Feb 20 00:26:14 crc kubenswrapper[5119]: I0220 00:26:14.094766 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=5.55523157 podStartE2EDuration="34.094749505s" podCreationTimestamp="2026-02-20 00:25:40 +0000 UTC" firstStartedPulling="2026-02-20 00:25:43.509123578 +0000 UTC m=+925.488087870" lastFinishedPulling="2026-02-20 00:26:12.048641483 +0000 UTC m=+954.027605805" observedRunningTime="2026-02-20 00:26:13.234034392 +0000 UTC m=+955.212998684" watchObservedRunningTime="2026-02-20 00:26:14.094749505 +0000 UTC m=+956.073713797" Feb 20 00:26:14 crc kubenswrapper[5119]: I0220 00:26:14.097728 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg"] Feb 20 00:26:14 crc kubenswrapper[5119]: I0220 00:26:14.103597 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" Feb 20 00:26:14 crc kubenswrapper[5119]: I0220 00:26:14.105604 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-sg-core-configmap\"" Feb 20 00:26:14 crc kubenswrapper[5119]: I0220 00:26:14.106444 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-proxy-tls\"" Feb 20 00:26:14 crc kubenswrapper[5119]: I0220 00:26:14.106476 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg"] Feb 20 00:26:14 crc kubenswrapper[5119]: I0220 00:26:14.162465 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/329d86eb-9fa7-44d2-aab2-b126b2c74320-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg\" (UID: \"329d86eb-9fa7-44d2-aab2-b126b2c74320\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" Feb 20 00:26:14 crc kubenswrapper[5119]: I0220 00:26:14.162526 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/329d86eb-9fa7-44d2-aab2-b126b2c74320-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg\" (UID: \"329d86eb-9fa7-44d2-aab2-b126b2c74320\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" Feb 20 00:26:14 crc kubenswrapper[5119]: I0220 00:26:14.162730 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/329d86eb-9fa7-44d2-aab2-b126b2c74320-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg\" (UID: \"329d86eb-9fa7-44d2-aab2-b126b2c74320\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" Feb 20 00:26:14 crc kubenswrapper[5119]: I0220 00:26:14.162859 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9b9z\" (UniqueName: 
\"kubernetes.io/projected/329d86eb-9fa7-44d2-aab2-b126b2c74320-kube-api-access-b9b9z\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg\" (UID: \"329d86eb-9fa7-44d2-aab2-b126b2c74320\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" Feb 20 00:26:14 crc kubenswrapper[5119]: I0220 00:26:14.162929 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/329d86eb-9fa7-44d2-aab2-b126b2c74320-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg\" (UID: \"329d86eb-9fa7-44d2-aab2-b126b2c74320\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" Feb 20 00:26:14 crc kubenswrapper[5119]: I0220 00:26:14.217416 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" event={"ID":"b2966a3b-fe46-4c89-a42a-4efe55c81cf4","Type":"ContainerStarted","Data":"5fff1f950d043bd90cfe3600b7e86d8890be973eeb66f70080e50878cebfa433"} Feb 20 00:26:14 crc kubenswrapper[5119]: I0220 00:26:14.263713 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/329d86eb-9fa7-44d2-aab2-b126b2c74320-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg\" (UID: \"329d86eb-9fa7-44d2-aab2-b126b2c74320\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" Feb 20 00:26:14 crc kubenswrapper[5119]: I0220 00:26:14.264691 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b9b9z\" (UniqueName: \"kubernetes.io/projected/329d86eb-9fa7-44d2-aab2-b126b2c74320-kube-api-access-b9b9z\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg\" (UID: \"329d86eb-9fa7-44d2-aab2-b126b2c74320\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" Feb 20 00:26:14 crc kubenswrapper[5119]: I0220 00:26:14.264777 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/329d86eb-9fa7-44d2-aab2-b126b2c74320-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg\" (UID: \"329d86eb-9fa7-44d2-aab2-b126b2c74320\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" Feb 20 00:26:14 crc kubenswrapper[5119]: I0220 00:26:14.264827 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/329d86eb-9fa7-44d2-aab2-b126b2c74320-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg\" (UID: \"329d86eb-9fa7-44d2-aab2-b126b2c74320\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" Feb 20 00:26:14 crc kubenswrapper[5119]: I0220 00:26:14.264862 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/329d86eb-9fa7-44d2-aab2-b126b2c74320-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg\" (UID: \"329d86eb-9fa7-44d2-aab2-b126b2c74320\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" Feb 20 00:26:14 crc kubenswrapper[5119]: E0220 00:26:14.265024 5119 secret.go:189] Couldn't get secret 
service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Feb 20 00:26:14 crc kubenswrapper[5119]: E0220 00:26:14.265122 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/329d86eb-9fa7-44d2-aab2-b126b2c74320-default-cloud1-sens-meter-proxy-tls podName:329d86eb-9fa7-44d2-aab2-b126b2c74320 nodeName:}" failed. No retries permitted until 2026-02-20 00:26:14.765078878 +0000 UTC m=+956.744043170 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/329d86eb-9fa7-44d2-aab2-b126b2c74320-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" (UID: "329d86eb-9fa7-44d2-aab2-b126b2c74320") : secret "default-cloud1-sens-meter-proxy-tls" not found Feb 20 00:26:14 crc kubenswrapper[5119]: I0220 00:26:14.265427 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/329d86eb-9fa7-44d2-aab2-b126b2c74320-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg\" (UID: \"329d86eb-9fa7-44d2-aab2-b126b2c74320\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" Feb 20 00:26:14 crc kubenswrapper[5119]: I0220 00:26:14.266424 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/329d86eb-9fa7-44d2-aab2-b126b2c74320-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg\" (UID: \"329d86eb-9fa7-44d2-aab2-b126b2c74320\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" Feb 20 00:26:14 crc kubenswrapper[5119]: I0220 00:26:14.271493 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/329d86eb-9fa7-44d2-aab2-b126b2c74320-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg\" (UID: \"329d86eb-9fa7-44d2-aab2-b126b2c74320\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" Feb 20 00:26:14 crc kubenswrapper[5119]: I0220 00:26:14.282271 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9b9z\" (UniqueName: \"kubernetes.io/projected/329d86eb-9fa7-44d2-aab2-b126b2c74320-kube-api-access-b9b9z\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg\" (UID: \"329d86eb-9fa7-44d2-aab2-b126b2c74320\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" Feb 20 00:26:14 crc kubenswrapper[5119]: I0220 00:26:14.773194 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/329d86eb-9fa7-44d2-aab2-b126b2c74320-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg\" (UID: \"329d86eb-9fa7-44d2-aab2-b126b2c74320\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" Feb 20 00:26:14 crc kubenswrapper[5119]: E0220 00:26:14.773733 5119 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Feb 20 00:26:14 crc kubenswrapper[5119]: E0220 00:26:14.773814 5119 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/329d86eb-9fa7-44d2-aab2-b126b2c74320-default-cloud1-sens-meter-proxy-tls podName:329d86eb-9fa7-44d2-aab2-b126b2c74320 nodeName:}" failed. No retries permitted until 2026-02-20 00:26:15.773795878 +0000 UTC m=+957.752760170 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/329d86eb-9fa7-44d2-aab2-b126b2c74320-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" (UID: "329d86eb-9fa7-44d2-aab2-b126b2c74320") : secret "default-cloud1-sens-meter-proxy-tls" not found Feb 20 00:26:15 crc kubenswrapper[5119]: I0220 00:26:15.224940 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"96709d8a-7441-425a-b115-9d8d56a0c603","Type":"ContainerStarted","Data":"9347cd99de723784971f0896070abae2c143f69fb26f6bda9e85c022d2997d8f"} Feb 20 00:26:15 crc kubenswrapper[5119]: I0220 00:26:15.787758 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/329d86eb-9fa7-44d2-aab2-b126b2c74320-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg\" (UID: \"329d86eb-9fa7-44d2-aab2-b126b2c74320\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" Feb 20 00:26:15 crc kubenswrapper[5119]: I0220 00:26:15.880380 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/329d86eb-9fa7-44d2-aab2-b126b2c74320-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg\" (UID: \"329d86eb-9fa7-44d2-aab2-b126b2c74320\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" Feb 20 00:26:15 crc kubenswrapper[5119]: I0220 00:26:15.918314 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" Feb 20 00:26:16 crc kubenswrapper[5119]: I0220 00:26:16.380424 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg"] Feb 20 00:26:17 crc kubenswrapper[5119]: I0220 00:26:17.247726 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"96709d8a-7441-425a-b115-9d8d56a0c603","Type":"ContainerStarted","Data":"3180fe8be3286a6a8688880244ca224f7e97303118ffd5bd7308ed6a796042ec"} Feb 20 00:26:17 crc kubenswrapper[5119]: I0220 00:26:17.251510 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" event={"ID":"329d86eb-9fa7-44d2-aab2-b126b2c74320","Type":"ContainerStarted","Data":"11cd07d9c3c5534a02939f5fb75e5015d4451ebb5c85b91420f8324aa81bfed3"} Feb 20 00:26:18 crc kubenswrapper[5119]: I0220 00:26:18.007258 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/prometheus-default-0" Feb 20 00:26:20 crc kubenswrapper[5119]: I0220 00:26:20.653153 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7"] Feb 20 00:26:20 crc kubenswrapper[5119]: I0220 00:26:20.658787 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7" Feb 20 00:26:20 crc kubenswrapper[5119]: I0220 00:26:20.660771 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-cert\"" Feb 20 00:26:20 crc kubenswrapper[5119]: I0220 00:26:20.660919 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-event-sg-core-configmap\"" Feb 20 00:26:20 crc kubenswrapper[5119]: I0220 00:26:20.663973 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7"] Feb 20 00:26:20 crc kubenswrapper[5119]: I0220 00:26:20.767714 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/8d58f754-9504-4e35-95f5-0264ce16e97a-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7\" (UID: \"8d58f754-9504-4e35-95f5-0264ce16e97a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7" Feb 20 00:26:20 crc kubenswrapper[5119]: I0220 00:26:20.767778 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/8d58f754-9504-4e35-95f5-0264ce16e97a-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7\" (UID: \"8d58f754-9504-4e35-95f5-0264ce16e97a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7" Feb 20 00:26:20 crc kubenswrapper[5119]: I0220 00:26:20.767843 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9wks\" (UniqueName: \"kubernetes.io/projected/8d58f754-9504-4e35-95f5-0264ce16e97a-kube-api-access-j9wks\") pod \"default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7\" (UID: \"8d58f754-9504-4e35-95f5-0264ce16e97a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7" Feb 20 00:26:20 crc kubenswrapper[5119]: I0220 00:26:20.767918 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/8d58f754-9504-4e35-95f5-0264ce16e97a-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7\" (UID: \"8d58f754-9504-4e35-95f5-0264ce16e97a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7" Feb 20 00:26:20 crc kubenswrapper[5119]: I0220 00:26:20.869570 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/8d58f754-9504-4e35-95f5-0264ce16e97a-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7\" (UID: \"8d58f754-9504-4e35-95f5-0264ce16e97a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7" Feb 20 00:26:20 crc kubenswrapper[5119]: I0220 00:26:20.869682 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/8d58f754-9504-4e35-95f5-0264ce16e97a-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7\" (UID: \"8d58f754-9504-4e35-95f5-0264ce16e97a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7" Feb 20 00:26:20 crc kubenswrapper[5119]: I0220 00:26:20.869733 5119 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/8d58f754-9504-4e35-95f5-0264ce16e97a-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7\" (UID: \"8d58f754-9504-4e35-95f5-0264ce16e97a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7" Feb 20 00:26:20 crc kubenswrapper[5119]: I0220 00:26:20.869789 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j9wks\" (UniqueName: \"kubernetes.io/projected/8d58f754-9504-4e35-95f5-0264ce16e97a-kube-api-access-j9wks\") pod \"default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7\" (UID: \"8d58f754-9504-4e35-95f5-0264ce16e97a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7" Feb 20 00:26:20 crc kubenswrapper[5119]: I0220 00:26:20.870763 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/8d58f754-9504-4e35-95f5-0264ce16e97a-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7\" (UID: \"8d58f754-9504-4e35-95f5-0264ce16e97a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7" Feb 20 00:26:20 crc kubenswrapper[5119]: I0220 00:26:20.871354 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/8d58f754-9504-4e35-95f5-0264ce16e97a-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7\" (UID: \"8d58f754-9504-4e35-95f5-0264ce16e97a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7" Feb 20 00:26:20 crc kubenswrapper[5119]: I0220 00:26:20.876692 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/8d58f754-9504-4e35-95f5-0264ce16e97a-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7\" (UID: \"8d58f754-9504-4e35-95f5-0264ce16e97a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7" Feb 20 00:26:20 crc kubenswrapper[5119]: I0220 00:26:20.891759 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9wks\" (UniqueName: \"kubernetes.io/projected/8d58f754-9504-4e35-95f5-0264ce16e97a-kube-api-access-j9wks\") pod \"default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7\" (UID: \"8d58f754-9504-4e35-95f5-0264ce16e97a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7" Feb 20 00:26:20 crc kubenswrapper[5119]: I0220 00:26:20.989701 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7" Feb 20 00:26:21 crc kubenswrapper[5119]: I0220 00:26:21.287224 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"96709d8a-7441-425a-b115-9d8d56a0c603","Type":"ContainerStarted","Data":"26e7815561636d2f8c9ba7a8811b78307ea6ef3b1ab25b4d04fddae13eb41f13"} Feb 20 00:26:21 crc kubenswrapper[5119]: I0220 00:26:21.306110 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" event={"ID":"b2966a3b-fe46-4c89-a42a-4efe55c81cf4","Type":"ContainerStarted","Data":"6fad9d1dffd70b051505c37b0ebf44de3ff42cf61b7cb065ff6158a5f914e5f0"} Feb 20 00:26:21 crc kubenswrapper[5119]: I0220 00:26:21.312871 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=20.099637896 podStartE2EDuration="28.312850606s" podCreationTimestamp="2026-02-20 00:25:53 +0000 UTC" firstStartedPulling="2026-02-20 00:26:12.174757987 +0000 UTC m=+954.153722279" lastFinishedPulling="2026-02-20 00:26:20.387970697 +0000 UTC m=+962.366934989" observedRunningTime="2026-02-20 00:26:21.311584922 +0000 UTC m=+963.290549214" watchObservedRunningTime="2026-02-20 00:26:21.312850606 +0000 UTC m=+963.291814888" Feb 20 00:26:21 crc kubenswrapper[5119]: I0220 00:26:21.324490 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" event={"ID":"329d86eb-9fa7-44d2-aab2-b126b2c74320","Type":"ContainerStarted","Data":"d191b7c98f062c5430985457645d02f2184682eac411b382f526bc94cca05324"} Feb 20 00:26:21 crc kubenswrapper[5119]: I0220 00:26:21.328518 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" event={"ID":"80a12472-2909-402e-8aed-167b8ddc8adf","Type":"ContainerStarted","Data":"5c1390ecbe1ab4f47bce08105ee8413dc48ce8d96bf288fa0ba393131c53ea34"} Feb 20 00:26:21 crc kubenswrapper[5119]: I0220 00:26:21.420063 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7"] Feb 20 00:26:22 crc kubenswrapper[5119]: I0220 00:26:22.051531 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr"] Feb 20 00:26:22 crc kubenswrapper[5119]: I0220 00:26:22.064737 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr"] Feb 20 00:26:22 crc kubenswrapper[5119]: I0220 00:26:22.064875 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr" Feb 20 00:26:22 crc kubenswrapper[5119]: I0220 00:26:22.066679 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-event-sg-core-configmap\"" Feb 20 00:26:22 crc kubenswrapper[5119]: I0220 00:26:22.198664 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/eca14d1c-8e86-45f7-9adc-9d54b6953e1e-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr\" (UID: \"eca14d1c-8e86-45f7-9adc-9d54b6953e1e\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr" Feb 20 00:26:22 crc kubenswrapper[5119]: I0220 00:26:22.198761 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/eca14d1c-8e86-45f7-9adc-9d54b6953e1e-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr\" (UID: \"eca14d1c-8e86-45f7-9adc-9d54b6953e1e\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr" Feb 20 00:26:22 crc kubenswrapper[5119]: I0220 00:26:22.198792 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/eca14d1c-8e86-45f7-9adc-9d54b6953e1e-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr\" (UID: \"eca14d1c-8e86-45f7-9adc-9d54b6953e1e\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr" Feb 20 00:26:22 crc kubenswrapper[5119]: I0220 00:26:22.198830 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72zxs\" (UniqueName: \"kubernetes.io/projected/eca14d1c-8e86-45f7-9adc-9d54b6953e1e-kube-api-access-72zxs\") pod \"default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr\" (UID: \"eca14d1c-8e86-45f7-9adc-9d54b6953e1e\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr" Feb 20 00:26:22 crc kubenswrapper[5119]: I0220 00:26:22.300144 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-72zxs\" (UniqueName: \"kubernetes.io/projected/eca14d1c-8e86-45f7-9adc-9d54b6953e1e-kube-api-access-72zxs\") pod \"default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr\" (UID: \"eca14d1c-8e86-45f7-9adc-9d54b6953e1e\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr" Feb 20 00:26:22 crc kubenswrapper[5119]: I0220 00:26:22.300203 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/eca14d1c-8e86-45f7-9adc-9d54b6953e1e-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr\" (UID: \"eca14d1c-8e86-45f7-9adc-9d54b6953e1e\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr" Feb 20 00:26:22 crc kubenswrapper[5119]: I0220 00:26:22.300276 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/eca14d1c-8e86-45f7-9adc-9d54b6953e1e-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr\" (UID: \"eca14d1c-8e86-45f7-9adc-9d54b6953e1e\") " 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr" Feb 20 00:26:22 crc kubenswrapper[5119]: I0220 00:26:22.300310 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/eca14d1c-8e86-45f7-9adc-9d54b6953e1e-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr\" (UID: \"eca14d1c-8e86-45f7-9adc-9d54b6953e1e\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr" Feb 20 00:26:22 crc kubenswrapper[5119]: I0220 00:26:22.301085 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/eca14d1c-8e86-45f7-9adc-9d54b6953e1e-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr\" (UID: \"eca14d1c-8e86-45f7-9adc-9d54b6953e1e\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr" Feb 20 00:26:22 crc kubenswrapper[5119]: I0220 00:26:22.301458 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/eca14d1c-8e86-45f7-9adc-9d54b6953e1e-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr\" (UID: \"eca14d1c-8e86-45f7-9adc-9d54b6953e1e\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr" Feb 20 00:26:22 crc kubenswrapper[5119]: I0220 00:26:22.309703 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/eca14d1c-8e86-45f7-9adc-9d54b6953e1e-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr\" (UID: \"eca14d1c-8e86-45f7-9adc-9d54b6953e1e\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr" Feb 20 00:26:22 crc kubenswrapper[5119]: I0220 00:26:22.317905 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-72zxs\" (UniqueName: \"kubernetes.io/projected/eca14d1c-8e86-45f7-9adc-9d54b6953e1e-kube-api-access-72zxs\") pod \"default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr\" (UID: \"eca14d1c-8e86-45f7-9adc-9d54b6953e1e\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr" Feb 20 00:26:22 crc kubenswrapper[5119]: I0220 00:26:22.338836 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7" event={"ID":"8d58f754-9504-4e35-95f5-0264ce16e97a","Type":"ContainerStarted","Data":"23aa08fca5dc963c7ea1ffa88b8670538437846f5f8230d1a0a78efcd4a6a305"} Feb 20 00:26:22 crc kubenswrapper[5119]: I0220 00:26:22.338884 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7" event={"ID":"8d58f754-9504-4e35-95f5-0264ce16e97a","Type":"ContainerStarted","Data":"fb34896abb1172d4a836f94059f572e18c69d052365af658d13000eae33eff01"} Feb 20 00:26:22 crc kubenswrapper[5119]: I0220 00:26:22.341589 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" event={"ID":"329d86eb-9fa7-44d2-aab2-b126b2c74320","Type":"ContainerStarted","Data":"e9d2427b31605f8a322e7484d039ba5a1eb09fa953728fc50f34709413cd8f3b"} Feb 20 00:26:22 crc kubenswrapper[5119]: I0220 00:26:22.401388 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr" Feb 20 00:26:22 crc kubenswrapper[5119]: I0220 00:26:22.864519 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr"] Feb 20 00:26:23 crc kubenswrapper[5119]: I0220 00:26:23.350754 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr" event={"ID":"eca14d1c-8e86-45f7-9adc-9d54b6953e1e","Type":"ContainerStarted","Data":"f293de8e0e714ebe903d728d3ebfba824780f416dc17c186377012328313db54"} Feb 20 00:26:24 crc kubenswrapper[5119]: I0220 00:26:24.376275 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr" event={"ID":"eca14d1c-8e86-45f7-9adc-9d54b6953e1e","Type":"ContainerStarted","Data":"8853ca833cb615566bc8d6c74c1ebec832bcf62fd0eb61061694a92f2c6dc3a2"} Feb 20 00:26:24 crc kubenswrapper[5119]: I0220 00:26:24.624272 5119 scope.go:117] "RemoveContainer" containerID="51463116e778b4174ca20273762680cc39cc30c6b2de294759e498fb64a5f71d" Feb 20 00:26:28 crc kubenswrapper[5119]: I0220 00:26:28.007253 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Feb 20 00:26:28 crc kubenswrapper[5119]: I0220 00:26:28.073857 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Feb 20 00:26:28 crc kubenswrapper[5119]: I0220 00:26:28.444872 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Feb 20 00:26:33 crc kubenswrapper[5119]: I0220 00:26:33.869670 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-99glm"] Feb 20 00:26:33 crc kubenswrapper[5119]: I0220 00:26:33.870396 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" podUID="61678311-ca0e-4066-a8ff-d8e438fc7108" containerName="default-interconnect" containerID="cri-o://19373faf1b5ed81c275df47dbb55ad0167a2ab409f5db85c3034a9faa7695cd3" gracePeriod=30 Feb 20 00:26:34 crc kubenswrapper[5119]: I0220 00:26:34.477019 5119 generic.go:358] "Generic (PLEG): container finished" podID="61678311-ca0e-4066-a8ff-d8e438fc7108" containerID="19373faf1b5ed81c275df47dbb55ad0167a2ab409f5db85c3034a9faa7695cd3" exitCode=0 Feb 20 00:26:34 crc kubenswrapper[5119]: I0220 00:26:34.477100 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" event={"ID":"61678311-ca0e-4066-a8ff-d8e438fc7108","Type":"ContainerDied","Data":"19373faf1b5ed81c275df47dbb55ad0167a2ab409f5db85c3034a9faa7695cd3"} Feb 20 00:26:34 crc kubenswrapper[5119]: I0220 00:26:34.479340 5119 generic.go:358] "Generic (PLEG): container finished" podID="8d58f754-9504-4e35-95f5-0264ce16e97a" containerID="23aa08fca5dc963c7ea1ffa88b8670538437846f5f8230d1a0a78efcd4a6a305" exitCode=0 Feb 20 00:26:34 crc kubenswrapper[5119]: I0220 00:26:34.479379 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7" event={"ID":"8d58f754-9504-4e35-95f5-0264ce16e97a","Type":"ContainerDied","Data":"23aa08fca5dc963c7ea1ffa88b8670538437846f5f8230d1a0a78efcd4a6a305"} Feb 20 00:26:34 crc kubenswrapper[5119]: I0220 00:26:34.481284 5119 
generic.go:358] "Generic (PLEG): container finished" podID="329d86eb-9fa7-44d2-aab2-b126b2c74320" containerID="e9d2427b31605f8a322e7484d039ba5a1eb09fa953728fc50f34709413cd8f3b" exitCode=0 Feb 20 00:26:34 crc kubenswrapper[5119]: I0220 00:26:34.481332 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" event={"ID":"329d86eb-9fa7-44d2-aab2-b126b2c74320","Type":"ContainerDied","Data":"e9d2427b31605f8a322e7484d039ba5a1eb09fa953728fc50f34709413cd8f3b"} Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.494362 5119 generic.go:358] "Generic (PLEG): container finished" podID="b2966a3b-fe46-4c89-a42a-4efe55c81cf4" containerID="6fad9d1dffd70b051505c37b0ebf44de3ff42cf61b7cb065ff6158a5f914e5f0" exitCode=0 Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.494437 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" event={"ID":"b2966a3b-fe46-4c89-a42a-4efe55c81cf4","Type":"ContainerDied","Data":"6fad9d1dffd70b051505c37b0ebf44de3ff42cf61b7cb065ff6158a5f914e5f0"} Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.496803 5119 generic.go:358] "Generic (PLEG): container finished" podID="80a12472-2909-402e-8aed-167b8ddc8adf" containerID="5c1390ecbe1ab4f47bce08105ee8413dc48ce8d96bf288fa0ba393131c53ea34" exitCode=0 Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.496858 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" event={"ID":"80a12472-2909-402e-8aed-167b8ddc8adf","Type":"ContainerDied","Data":"5c1390ecbe1ab4f47bce08105ee8413dc48ce8d96bf288fa0ba393131c53ea34"} Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.498233 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" event={"ID":"61678311-ca0e-4066-a8ff-d8e438fc7108","Type":"ContainerDied","Data":"b6aadc5455bf2f15ca8142a5fdcde1c4b8e73ad7d3c086215da20103777d1162"} Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.498264 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6aadc5455bf2f15ca8142a5fdcde1c4b8e73ad7d3c086215da20103777d1162" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.499834 5119 generic.go:358] "Generic (PLEG): container finished" podID="eca14d1c-8e86-45f7-9adc-9d54b6953e1e" containerID="8853ca833cb615566bc8d6c74c1ebec832bcf62fd0eb61061694a92f2c6dc3a2" exitCode=0 Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.499857 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr" event={"ID":"eca14d1c-8e86-45f7-9adc-9d54b6953e1e","Type":"ContainerDied","Data":"8853ca833cb615566bc8d6c74c1ebec832bcf62fd0eb61061694a92f2c6dc3a2"} Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.541182 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.597924 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-l4jtl"] Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.598910 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="61678311-ca0e-4066-a8ff-d8e438fc7108" containerName="default-interconnect" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.598933 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="61678311-ca0e-4066-a8ff-d8e438fc7108" containerName="default-interconnect" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.599052 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="61678311-ca0e-4066-a8ff-d8e438fc7108" containerName="default-interconnect" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.602323 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-l4jtl" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.605968 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-l4jtl"] Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.648149 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/61678311-ca0e-4066-a8ff-d8e438fc7108-default-interconnect-inter-router-ca\") pod \"61678311-ca0e-4066-a8ff-d8e438fc7108\" (UID: \"61678311-ca0e-4066-a8ff-d8e438fc7108\") " Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.648218 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nrkp\" (UniqueName: \"kubernetes.io/projected/61678311-ca0e-4066-a8ff-d8e438fc7108-kube-api-access-6nrkp\") pod \"61678311-ca0e-4066-a8ff-d8e438fc7108\" (UID: \"61678311-ca0e-4066-a8ff-d8e438fc7108\") " Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.648239 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/61678311-ca0e-4066-a8ff-d8e438fc7108-sasl-config\") pod \"61678311-ca0e-4066-a8ff-d8e438fc7108\" (UID: \"61678311-ca0e-4066-a8ff-d8e438fc7108\") " Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.648342 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/61678311-ca0e-4066-a8ff-d8e438fc7108-default-interconnect-inter-router-credentials\") pod \"61678311-ca0e-4066-a8ff-d8e438fc7108\" (UID: \"61678311-ca0e-4066-a8ff-d8e438fc7108\") " Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.648406 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/61678311-ca0e-4066-a8ff-d8e438fc7108-sasl-users\") pod \"61678311-ca0e-4066-a8ff-d8e438fc7108\" (UID: \"61678311-ca0e-4066-a8ff-d8e438fc7108\") " Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.648474 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/61678311-ca0e-4066-a8ff-d8e438fc7108-default-interconnect-openstack-ca\") pod \"61678311-ca0e-4066-a8ff-d8e438fc7108\" (UID: \"61678311-ca0e-4066-a8ff-d8e438fc7108\") " Feb 20 00:26:35 crc 
kubenswrapper[5119]: I0220 00:26:35.648497 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/61678311-ca0e-4066-a8ff-d8e438fc7108-default-interconnect-openstack-credentials\") pod \"61678311-ca0e-4066-a8ff-d8e438fc7108\" (UID: \"61678311-ca0e-4066-a8ff-d8e438fc7108\") " Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.648712 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/534a6547-d0c6-4c15-983a-3adf0be07a15-sasl-users\") pod \"default-interconnect-55bf8d5cb-l4jtl\" (UID: \"534a6547-d0c6-4c15-983a-3adf0be07a15\") " pod="service-telemetry/default-interconnect-55bf8d5cb-l4jtl" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.648745 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/534a6547-d0c6-4c15-983a-3adf0be07a15-sasl-config\") pod \"default-interconnect-55bf8d5cb-l4jtl\" (UID: \"534a6547-d0c6-4c15-983a-3adf0be07a15\") " pod="service-telemetry/default-interconnect-55bf8d5cb-l4jtl" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.648762 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/534a6547-d0c6-4c15-983a-3adf0be07a15-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-l4jtl\" (UID: \"534a6547-d0c6-4c15-983a-3adf0be07a15\") " pod="service-telemetry/default-interconnect-55bf8d5cb-l4jtl" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.648816 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/534a6547-d0c6-4c15-983a-3adf0be07a15-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-l4jtl\" (UID: \"534a6547-d0c6-4c15-983a-3adf0be07a15\") " pod="service-telemetry/default-interconnect-55bf8d5cb-l4jtl" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.648864 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/534a6547-d0c6-4c15-983a-3adf0be07a15-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-l4jtl\" (UID: \"534a6547-d0c6-4c15-983a-3adf0be07a15\") " pod="service-telemetry/default-interconnect-55bf8d5cb-l4jtl" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.648900 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/534a6547-d0c6-4c15-983a-3adf0be07a15-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-l4jtl\" (UID: \"534a6547-d0c6-4c15-983a-3adf0be07a15\") " pod="service-telemetry/default-interconnect-55bf8d5cb-l4jtl" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.648974 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr8dc\" (UniqueName: \"kubernetes.io/projected/534a6547-d0c6-4c15-983a-3adf0be07a15-kube-api-access-qr8dc\") pod \"default-interconnect-55bf8d5cb-l4jtl\" (UID: \"534a6547-d0c6-4c15-983a-3adf0be07a15\") " 
pod="service-telemetry/default-interconnect-55bf8d5cb-l4jtl" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.650106 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61678311-ca0e-4066-a8ff-d8e438fc7108-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "61678311-ca0e-4066-a8ff-d8e438fc7108" (UID: "61678311-ca0e-4066-a8ff-d8e438fc7108"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.655784 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61678311-ca0e-4066-a8ff-d8e438fc7108-kube-api-access-6nrkp" (OuterVolumeSpecName: "kube-api-access-6nrkp") pod "61678311-ca0e-4066-a8ff-d8e438fc7108" (UID: "61678311-ca0e-4066-a8ff-d8e438fc7108"). InnerVolumeSpecName "kube-api-access-6nrkp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.655927 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61678311-ca0e-4066-a8ff-d8e438fc7108-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "61678311-ca0e-4066-a8ff-d8e438fc7108" (UID: "61678311-ca0e-4066-a8ff-d8e438fc7108"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.656080 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61678311-ca0e-4066-a8ff-d8e438fc7108-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "61678311-ca0e-4066-a8ff-d8e438fc7108" (UID: "61678311-ca0e-4066-a8ff-d8e438fc7108"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.656184 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61678311-ca0e-4066-a8ff-d8e438fc7108-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "61678311-ca0e-4066-a8ff-d8e438fc7108" (UID: "61678311-ca0e-4066-a8ff-d8e438fc7108"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.657680 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61678311-ca0e-4066-a8ff-d8e438fc7108-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "61678311-ca0e-4066-a8ff-d8e438fc7108" (UID: "61678311-ca0e-4066-a8ff-d8e438fc7108"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.659735 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61678311-ca0e-4066-a8ff-d8e438fc7108-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "61678311-ca0e-4066-a8ff-d8e438fc7108" (UID: "61678311-ca0e-4066-a8ff-d8e438fc7108"). InnerVolumeSpecName "sasl-users". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.750325 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/534a6547-d0c6-4c15-983a-3adf0be07a15-sasl-users\") pod \"default-interconnect-55bf8d5cb-l4jtl\" (UID: \"534a6547-d0c6-4c15-983a-3adf0be07a15\") " pod="service-telemetry/default-interconnect-55bf8d5cb-l4jtl" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.750376 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/534a6547-d0c6-4c15-983a-3adf0be07a15-sasl-config\") pod \"default-interconnect-55bf8d5cb-l4jtl\" (UID: \"534a6547-d0c6-4c15-983a-3adf0be07a15\") " pod="service-telemetry/default-interconnect-55bf8d5cb-l4jtl" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.751276 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/534a6547-d0c6-4c15-983a-3adf0be07a15-sasl-config\") pod \"default-interconnect-55bf8d5cb-l4jtl\" (UID: \"534a6547-d0c6-4c15-983a-3adf0be07a15\") " pod="service-telemetry/default-interconnect-55bf8d5cb-l4jtl" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.751329 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/534a6547-d0c6-4c15-983a-3adf0be07a15-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-l4jtl\" (UID: \"534a6547-d0c6-4c15-983a-3adf0be07a15\") " pod="service-telemetry/default-interconnect-55bf8d5cb-l4jtl" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.751387 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/534a6547-d0c6-4c15-983a-3adf0be07a15-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-l4jtl\" (UID: \"534a6547-d0c6-4c15-983a-3adf0be07a15\") " pod="service-telemetry/default-interconnect-55bf8d5cb-l4jtl" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.751437 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/534a6547-d0c6-4c15-983a-3adf0be07a15-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-l4jtl\" (UID: \"534a6547-d0c6-4c15-983a-3adf0be07a15\") " pod="service-telemetry/default-interconnect-55bf8d5cb-l4jtl" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.751475 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/534a6547-d0c6-4c15-983a-3adf0be07a15-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-l4jtl\" (UID: \"534a6547-d0c6-4c15-983a-3adf0be07a15\") " pod="service-telemetry/default-interconnect-55bf8d5cb-l4jtl" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.751588 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qr8dc\" (UniqueName: \"kubernetes.io/projected/534a6547-d0c6-4c15-983a-3adf0be07a15-kube-api-access-qr8dc\") pod \"default-interconnect-55bf8d5cb-l4jtl\" (UID: \"534a6547-d0c6-4c15-983a-3adf0be07a15\") " pod="service-telemetry/default-interconnect-55bf8d5cb-l4jtl" Feb 20 00:26:35 crc 
kubenswrapper[5119]: I0220 00:26:35.751652 5119 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/61678311-ca0e-4066-a8ff-d8e438fc7108-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.751667 5119 reconciler_common.go:299] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/61678311-ca0e-4066-a8ff-d8e438fc7108-sasl-users\") on node \"crc\" DevicePath \"\"" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.751678 5119 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/61678311-ca0e-4066-a8ff-d8e438fc7108-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.751692 5119 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/61678311-ca0e-4066-a8ff-d8e438fc7108-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.751705 5119 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/61678311-ca0e-4066-a8ff-d8e438fc7108-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.751717 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6nrkp\" (UniqueName: \"kubernetes.io/projected/61678311-ca0e-4066-a8ff-d8e438fc7108-kube-api-access-6nrkp\") on node \"crc\" DevicePath \"\"" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.751728 5119 reconciler_common.go:299] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/61678311-ca0e-4066-a8ff-d8e438fc7108-sasl-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.756496 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/534a6547-d0c6-4c15-983a-3adf0be07a15-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-l4jtl\" (UID: \"534a6547-d0c6-4c15-983a-3adf0be07a15\") " pod="service-telemetry/default-interconnect-55bf8d5cb-l4jtl" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.756494 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/534a6547-d0c6-4c15-983a-3adf0be07a15-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-l4jtl\" (UID: \"534a6547-d0c6-4c15-983a-3adf0be07a15\") " pod="service-telemetry/default-interconnect-55bf8d5cb-l4jtl" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.757005 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/534a6547-d0c6-4c15-983a-3adf0be07a15-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-l4jtl\" (UID: \"534a6547-d0c6-4c15-983a-3adf0be07a15\") " pod="service-telemetry/default-interconnect-55bf8d5cb-l4jtl" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.758101 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: 
\"kubernetes.io/secret/534a6547-d0c6-4c15-983a-3adf0be07a15-sasl-users\") pod \"default-interconnect-55bf8d5cb-l4jtl\" (UID: \"534a6547-d0c6-4c15-983a-3adf0be07a15\") " pod="service-telemetry/default-interconnect-55bf8d5cb-l4jtl" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.759276 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/534a6547-d0c6-4c15-983a-3adf0be07a15-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-l4jtl\" (UID: \"534a6547-d0c6-4c15-983a-3adf0be07a15\") " pod="service-telemetry/default-interconnect-55bf8d5cb-l4jtl" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.778575 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr8dc\" (UniqueName: \"kubernetes.io/projected/534a6547-d0c6-4c15-983a-3adf0be07a15-kube-api-access-qr8dc\") pod \"default-interconnect-55bf8d5cb-l4jtl\" (UID: \"534a6547-d0c6-4c15-983a-3adf0be07a15\") " pod="service-telemetry/default-interconnect-55bf8d5cb-l4jtl" Feb 20 00:26:35 crc kubenswrapper[5119]: I0220 00:26:35.928514 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-l4jtl" Feb 20 00:26:36 crc kubenswrapper[5119]: I0220 00:26:36.416040 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-l4jtl"] Feb 20 00:26:36 crc kubenswrapper[5119]: W0220 00:26:36.432333 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod534a6547_d0c6_4c15_983a_3adf0be07a15.slice/crio-3456604d695d98d7d9cd0df5a10dc8fc76a625345e5818179cd99241bba50153 WatchSource:0}: Error finding container 3456604d695d98d7d9cd0df5a10dc8fc76a625345e5818179cd99241bba50153: Status 404 returned error can't find the container with id 3456604d695d98d7d9cd0df5a10dc8fc76a625345e5818179cd99241bba50153 Feb 20 00:26:36 crc kubenswrapper[5119]: I0220 00:26:36.516690 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" event={"ID":"80a12472-2909-402e-8aed-167b8ddc8adf","Type":"ContainerStarted","Data":"4068a5c1cac22eac589e2b0ff3587b844dd8e1f7028d8707ca6910d7d846ab63"} Feb 20 00:26:36 crc kubenswrapper[5119]: I0220 00:26:36.517350 5119 scope.go:117] "RemoveContainer" containerID="5c1390ecbe1ab4f47bce08105ee8413dc48ce8d96bf288fa0ba393131c53ea34" Feb 20 00:26:36 crc kubenswrapper[5119]: I0220 00:26:36.520829 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr" event={"ID":"eca14d1c-8e86-45f7-9adc-9d54b6953e1e","Type":"ContainerStarted","Data":"277198870e5a81744cffe87c21a8e5eb8c5b4e76645bc8c184972018fc8748d4"} Feb 20 00:26:36 crc kubenswrapper[5119]: I0220 00:26:36.521163 5119 scope.go:117] "RemoveContainer" containerID="8853ca833cb615566bc8d6c74c1ebec832bcf62fd0eb61061694a92f2c6dc3a2" Feb 20 00:26:36 crc kubenswrapper[5119]: I0220 00:26:36.524482 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-l4jtl" event={"ID":"534a6547-d0c6-4c15-983a-3adf0be07a15","Type":"ContainerStarted","Data":"3456604d695d98d7d9cd0df5a10dc8fc76a625345e5818179cd99241bba50153"} Feb 20 00:26:36 crc kubenswrapper[5119]: I0220 00:26:36.529136 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" event={"ID":"b2966a3b-fe46-4c89-a42a-4efe55c81cf4","Type":"ContainerStarted","Data":"376b43914456ec116b9eb1742fa60775679057409488a45c63a72d72e41d222b"} Feb 20 00:26:36 crc kubenswrapper[5119]: I0220 00:26:36.530290 5119 scope.go:117] "RemoveContainer" containerID="6fad9d1dffd70b051505c37b0ebf44de3ff42cf61b7cb065ff6158a5f914e5f0" Feb 20 00:26:36 crc kubenswrapper[5119]: I0220 00:26:36.534466 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7" event={"ID":"8d58f754-9504-4e35-95f5-0264ce16e97a","Type":"ContainerStarted","Data":"fc61211e15edb7223bebc2b65a9bd8ba044a8f7412d0b9effa21f1abfef155a2"} Feb 20 00:26:36 crc kubenswrapper[5119]: I0220 00:26:36.535031 5119 scope.go:117] "RemoveContainer" containerID="23aa08fca5dc963c7ea1ffa88b8670538437846f5f8230d1a0a78efcd4a6a305" Feb 20 00:26:36 crc kubenswrapper[5119]: I0220 00:26:36.538623 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-99glm" Feb 20 00:26:36 crc kubenswrapper[5119]: I0220 00:26:36.538689 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" event={"ID":"329d86eb-9fa7-44d2-aab2-b126b2c74320","Type":"ContainerStarted","Data":"a713424b1561c9f9ee832078ef2101c0b33113ddec16eba2ead1b5496b411f2e"} Feb 20 00:26:36 crc kubenswrapper[5119]: I0220 00:26:36.538977 5119 scope.go:117] "RemoveContainer" containerID="e9d2427b31605f8a322e7484d039ba5a1eb09fa953728fc50f34709413cd8f3b" Feb 20 00:26:36 crc kubenswrapper[5119]: I0220 00:26:36.653229 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-99glm"] Feb 20 00:26:36 crc kubenswrapper[5119]: I0220 00:26:36.657809 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-99glm"] Feb 20 00:26:36 crc kubenswrapper[5119]: I0220 00:26:36.865446 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61678311-ca0e-4066-a8ff-d8e438fc7108" path="/var/lib/kubelet/pods/61678311-ca0e-4066-a8ff-d8e438fc7108/volumes" Feb 20 00:26:37 crc kubenswrapper[5119]: I0220 00:26:37.549993 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" event={"ID":"80a12472-2909-402e-8aed-167b8ddc8adf","Type":"ContainerStarted","Data":"7f01862852b0d83e61475084dea0f8d86a404fdc228d1f622bb7fd4ec83bdb91"} Feb 20 00:26:37 crc kubenswrapper[5119]: I0220 00:26:37.553012 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr" event={"ID":"eca14d1c-8e86-45f7-9adc-9d54b6953e1e","Type":"ContainerStarted","Data":"9f5ba9231be4df910c23bcd3527be0c98de582396ea5db85a8d64cf399d3b3b1"} Feb 20 00:26:37 crc kubenswrapper[5119]: I0220 00:26:37.555167 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-l4jtl" event={"ID":"534a6547-d0c6-4c15-983a-3adf0be07a15","Type":"ContainerStarted","Data":"3d9beece78555dda1898ba75748d6b892170fbdf3e3036fffc7d13cf2a64a6e9"} Feb 20 00:26:37 crc kubenswrapper[5119]: I0220 00:26:37.560224 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" 
event={"ID":"b2966a3b-fe46-4c89-a42a-4efe55c81cf4","Type":"ContainerStarted","Data":"467606546bc53d5e2e516bea5f62edf6a0d8e0121fc941ef4e0de1b48f01d063"} Feb 20 00:26:37 crc kubenswrapper[5119]: I0220 00:26:37.563946 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7" event={"ID":"8d58f754-9504-4e35-95f5-0264ce16e97a","Type":"ContainerStarted","Data":"b2adee4102be1bb1203d2ed2c675affc137e70c7d9c9b2ab8c32c6e596015547"} Feb 20 00:26:37 crc kubenswrapper[5119]: I0220 00:26:37.569069 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" event={"ID":"329d86eb-9fa7-44d2-aab2-b126b2c74320","Type":"ContainerStarted","Data":"896e2365f5cbba87e99b20a2fb29fb16d7f98135e1fbe053058d12b5cefb0410"} Feb 20 00:26:37 crc kubenswrapper[5119]: I0220 00:26:37.578301 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" podStartSLOduration=4.91246862 podStartE2EDuration="29.578269833s" podCreationTimestamp="2026-02-20 00:26:08 +0000 UTC" firstStartedPulling="2026-02-20 00:26:12.337107886 +0000 UTC m=+954.316072178" lastFinishedPulling="2026-02-20 00:26:37.002909109 +0000 UTC m=+978.981873391" observedRunningTime="2026-02-20 00:26:37.568716055 +0000 UTC m=+979.547680367" watchObservedRunningTime="2026-02-20 00:26:37.578269833 +0000 UTC m=+979.557234165" Feb 20 00:26:37 crc kubenswrapper[5119]: I0220 00:26:37.610036 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-l4jtl" podStartSLOduration=4.610021116 podStartE2EDuration="4.610021116s" podCreationTimestamp="2026-02-20 00:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:26:37.608077844 +0000 UTC m=+979.587042136" watchObservedRunningTime="2026-02-20 00:26:37.610021116 +0000 UTC m=+979.588985398" Feb 20 00:26:37 crc kubenswrapper[5119]: I0220 00:26:37.644700 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" podStartSLOduration=3.212872943 podStartE2EDuration="27.644678399s" podCreationTimestamp="2026-02-20 00:26:10 +0000 UTC" firstStartedPulling="2026-02-20 00:26:12.511297914 +0000 UTC m=+954.490262216" lastFinishedPulling="2026-02-20 00:26:36.94310334 +0000 UTC m=+978.922067672" observedRunningTime="2026-02-20 00:26:37.642008077 +0000 UTC m=+979.620972369" watchObservedRunningTime="2026-02-20 00:26:37.644678399 +0000 UTC m=+979.623642691" Feb 20 00:26:37 crc kubenswrapper[5119]: I0220 00:26:37.680074 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr" podStartSLOduration=1.5708893800000001 podStartE2EDuration="15.680047401s" podCreationTimestamp="2026-02-20 00:26:22 +0000 UTC" firstStartedPulling="2026-02-20 00:26:22.879965657 +0000 UTC m=+964.858929949" lastFinishedPulling="2026-02-20 00:26:36.989123638 +0000 UTC m=+978.968087970" observedRunningTime="2026-02-20 00:26:37.667925745 +0000 UTC m=+979.646890037" watchObservedRunningTime="2026-02-20 00:26:37.680047401 +0000 UTC m=+979.659011703" Feb 20 00:26:37 crc kubenswrapper[5119]: I0220 00:26:37.712184 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" podStartSLOduration=3.095014281 podStartE2EDuration="23.712170355s" podCreationTimestamp="2026-02-20 00:26:14 +0000 UTC" firstStartedPulling="2026-02-20 00:26:16.40672249 +0000 UTC m=+958.385686782" lastFinishedPulling="2026-02-20 00:26:37.023878554 +0000 UTC m=+979.002842856" observedRunningTime="2026-02-20 00:26:37.710984964 +0000 UTC m=+979.689949256" watchObservedRunningTime="2026-02-20 00:26:37.712170355 +0000 UTC m=+979.691134647" Feb 20 00:26:37 crc kubenswrapper[5119]: I0220 00:26:37.714703 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7" podStartSLOduration=2.194577782 podStartE2EDuration="17.714695703s" podCreationTimestamp="2026-02-20 00:26:20 +0000 UTC" firstStartedPulling="2026-02-20 00:26:21.435128946 +0000 UTC m=+963.414093238" lastFinishedPulling="2026-02-20 00:26:36.955246837 +0000 UTC m=+978.934211159" observedRunningTime="2026-02-20 00:26:37.691866599 +0000 UTC m=+979.670830911" watchObservedRunningTime="2026-02-20 00:26:37.714695703 +0000 UTC m=+979.693659995" Feb 20 00:26:38 crc kubenswrapper[5119]: I0220 00:26:38.580325 5119 generic.go:358] "Generic (PLEG): container finished" podID="80a12472-2909-402e-8aed-167b8ddc8adf" containerID="7f01862852b0d83e61475084dea0f8d86a404fdc228d1f622bb7fd4ec83bdb91" exitCode=0 Feb 20 00:26:38 crc kubenswrapper[5119]: I0220 00:26:38.580371 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" event={"ID":"80a12472-2909-402e-8aed-167b8ddc8adf","Type":"ContainerDied","Data":"7f01862852b0d83e61475084dea0f8d86a404fdc228d1f622bb7fd4ec83bdb91"} Feb 20 00:26:38 crc kubenswrapper[5119]: I0220 00:26:38.580432 5119 scope.go:117] "RemoveContainer" containerID="5c1390ecbe1ab4f47bce08105ee8413dc48ce8d96bf288fa0ba393131c53ea34" Feb 20 00:26:38 crc kubenswrapper[5119]: I0220 00:26:38.580996 5119 scope.go:117] "RemoveContainer" containerID="7f01862852b0d83e61475084dea0f8d86a404fdc228d1f622bb7fd4ec83bdb91" Feb 20 00:26:38 crc kubenswrapper[5119]: E0220 00:26:38.581329 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-787645d794-gv8rg_service-telemetry(80a12472-2909-402e-8aed-167b8ddc8adf)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" podUID="80a12472-2909-402e-8aed-167b8ddc8adf" Feb 20 00:26:38 crc kubenswrapper[5119]: I0220 00:26:38.590902 5119 generic.go:358] "Generic (PLEG): container finished" podID="eca14d1c-8e86-45f7-9adc-9d54b6953e1e" containerID="9f5ba9231be4df910c23bcd3527be0c98de582396ea5db85a8d64cf399d3b3b1" exitCode=0 Feb 20 00:26:38 crc kubenswrapper[5119]: I0220 00:26:38.590993 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr" event={"ID":"eca14d1c-8e86-45f7-9adc-9d54b6953e1e","Type":"ContainerDied","Data":"9f5ba9231be4df910c23bcd3527be0c98de582396ea5db85a8d64cf399d3b3b1"} Feb 20 00:26:38 crc kubenswrapper[5119]: I0220 00:26:38.591437 5119 scope.go:117] "RemoveContainer" containerID="9f5ba9231be4df910c23bcd3527be0c98de582396ea5db85a8d64cf399d3b3b1" Feb 20 00:26:38 crc kubenswrapper[5119]: E0220 00:26:38.591769 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr_service-telemetry(eca14d1c-8e86-45f7-9adc-9d54b6953e1e)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr" podUID="eca14d1c-8e86-45f7-9adc-9d54b6953e1e" Feb 20 00:26:38 crc kubenswrapper[5119]: I0220 00:26:38.601402 5119 generic.go:358] "Generic (PLEG): container finished" podID="b2966a3b-fe46-4c89-a42a-4efe55c81cf4" containerID="467606546bc53d5e2e516bea5f62edf6a0d8e0121fc941ef4e0de1b48f01d063" exitCode=0 Feb 20 00:26:38 crc kubenswrapper[5119]: I0220 00:26:38.601784 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" event={"ID":"b2966a3b-fe46-4c89-a42a-4efe55c81cf4","Type":"ContainerDied","Data":"467606546bc53d5e2e516bea5f62edf6a0d8e0121fc941ef4e0de1b48f01d063"} Feb 20 00:26:38 crc kubenswrapper[5119]: I0220 00:26:38.602036 5119 scope.go:117] "RemoveContainer" containerID="467606546bc53d5e2e516bea5f62edf6a0d8e0121fc941ef4e0de1b48f01d063" Feb 20 00:26:38 crc kubenswrapper[5119]: E0220 00:26:38.602350 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5_service-telemetry(b2966a3b-fe46-4c89-a42a-4efe55c81cf4)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" podUID="b2966a3b-fe46-4c89-a42a-4efe55c81cf4" Feb 20 00:26:38 crc kubenswrapper[5119]: I0220 00:26:38.603803 5119 generic.go:358] "Generic (PLEG): container finished" podID="8d58f754-9504-4e35-95f5-0264ce16e97a" containerID="b2adee4102be1bb1203d2ed2c675affc137e70c7d9c9b2ab8c32c6e596015547" exitCode=0 Feb 20 00:26:38 crc kubenswrapper[5119]: I0220 00:26:38.603941 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7" event={"ID":"8d58f754-9504-4e35-95f5-0264ce16e97a","Type":"ContainerDied","Data":"b2adee4102be1bb1203d2ed2c675affc137e70c7d9c9b2ab8c32c6e596015547"} Feb 20 00:26:38 crc kubenswrapper[5119]: I0220 00:26:38.604398 5119 scope.go:117] "RemoveContainer" containerID="b2adee4102be1bb1203d2ed2c675affc137e70c7d9c9b2ab8c32c6e596015547" Feb 20 00:26:38 crc kubenswrapper[5119]: E0220 00:26:38.605036 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7_service-telemetry(8d58f754-9504-4e35-95f5-0264ce16e97a)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7" podUID="8d58f754-9504-4e35-95f5-0264ce16e97a" Feb 20 00:26:38 crc kubenswrapper[5119]: I0220 00:26:38.612653 5119 generic.go:358] "Generic (PLEG): container finished" podID="329d86eb-9fa7-44d2-aab2-b126b2c74320" containerID="896e2365f5cbba87e99b20a2fb29fb16d7f98135e1fbe053058d12b5cefb0410" exitCode=0 Feb 20 00:26:38 crc kubenswrapper[5119]: I0220 00:26:38.612989 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" event={"ID":"329d86eb-9fa7-44d2-aab2-b126b2c74320","Type":"ContainerDied","Data":"896e2365f5cbba87e99b20a2fb29fb16d7f98135e1fbe053058d12b5cefb0410"} Feb 20 00:26:38 crc kubenswrapper[5119]: I0220 00:26:38.613758 
5119 scope.go:117] "RemoveContainer" containerID="896e2365f5cbba87e99b20a2fb29fb16d7f98135e1fbe053058d12b5cefb0410" Feb 20 00:26:38 crc kubenswrapper[5119]: E0220 00:26:38.613915 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg_service-telemetry(329d86eb-9fa7-44d2-aab2-b126b2c74320)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" podUID="329d86eb-9fa7-44d2-aab2-b126b2c74320" Feb 20 00:26:38 crc kubenswrapper[5119]: I0220 00:26:38.624133 5119 scope.go:117] "RemoveContainer" containerID="8853ca833cb615566bc8d6c74c1ebec832bcf62fd0eb61061694a92f2c6dc3a2" Feb 20 00:26:38 crc kubenswrapper[5119]: I0220 00:26:38.716040 5119 scope.go:117] "RemoveContainer" containerID="6fad9d1dffd70b051505c37b0ebf44de3ff42cf61b7cb065ff6158a5f914e5f0" Feb 20 00:26:38 crc kubenswrapper[5119]: I0220 00:26:38.771286 5119 scope.go:117] "RemoveContainer" containerID="23aa08fca5dc963c7ea1ffa88b8670538437846f5f8230d1a0a78efcd4a6a305" Feb 20 00:26:38 crc kubenswrapper[5119]: I0220 00:26:38.814061 5119 scope.go:117] "RemoveContainer" containerID="e9d2427b31605f8a322e7484d039ba5a1eb09fa953728fc50f34709413cd8f3b" Feb 20 00:26:39 crc kubenswrapper[5119]: I0220 00:26:39.622556 5119 scope.go:117] "RemoveContainer" containerID="467606546bc53d5e2e516bea5f62edf6a0d8e0121fc941ef4e0de1b48f01d063" Feb 20 00:26:39 crc kubenswrapper[5119]: E0220 00:26:39.622879 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5_service-telemetry(b2966a3b-fe46-4c89-a42a-4efe55c81cf4)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" podUID="b2966a3b-fe46-4c89-a42a-4efe55c81cf4" Feb 20 00:26:39 crc kubenswrapper[5119]: I0220 00:26:39.625164 5119 scope.go:117] "RemoveContainer" containerID="b2adee4102be1bb1203d2ed2c675affc137e70c7d9c9b2ab8c32c6e596015547" Feb 20 00:26:39 crc kubenswrapper[5119]: E0220 00:26:39.625369 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7_service-telemetry(8d58f754-9504-4e35-95f5-0264ce16e97a)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7" podUID="8d58f754-9504-4e35-95f5-0264ce16e97a" Feb 20 00:26:39 crc kubenswrapper[5119]: I0220 00:26:39.627928 5119 scope.go:117] "RemoveContainer" containerID="896e2365f5cbba87e99b20a2fb29fb16d7f98135e1fbe053058d12b5cefb0410" Feb 20 00:26:39 crc kubenswrapper[5119]: E0220 00:26:39.628236 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg_service-telemetry(329d86eb-9fa7-44d2-aab2-b126b2c74320)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" podUID="329d86eb-9fa7-44d2-aab2-b126b2c74320" Feb 20 00:26:39 crc kubenswrapper[5119]: I0220 00:26:39.630006 5119 scope.go:117] "RemoveContainer" containerID="7f01862852b0d83e61475084dea0f8d86a404fdc228d1f622bb7fd4ec83bdb91" Feb 20 00:26:39 crc kubenswrapper[5119]: E0220 00:26:39.630182 
5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-787645d794-gv8rg_service-telemetry(80a12472-2909-402e-8aed-167b8ddc8adf)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" podUID="80a12472-2909-402e-8aed-167b8ddc8adf" Feb 20 00:26:39 crc kubenswrapper[5119]: I0220 00:26:39.632246 5119 scope.go:117] "RemoveContainer" containerID="9f5ba9231be4df910c23bcd3527be0c98de582396ea5db85a8d64cf399d3b3b1" Feb 20 00:26:39 crc kubenswrapper[5119]: E0220 00:26:39.632448 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr_service-telemetry(eca14d1c-8e86-45f7-9adc-9d54b6953e1e)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr" podUID="eca14d1c-8e86-45f7-9adc-9d54b6953e1e" Feb 20 00:26:42 crc kubenswrapper[5119]: I0220 00:26:42.161377 5119 patch_prober.go:28] interesting pod/machine-config-daemon-l7jjp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:26:42 crc kubenswrapper[5119]: I0220 00:26:42.162505 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" podUID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:26:42 crc kubenswrapper[5119]: I0220 00:26:42.162694 5119 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" Feb 20 00:26:42 crc kubenswrapper[5119]: I0220 00:26:42.163732 5119 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"89838faa3e23ccc0655e0096613e091fc8decdd475bfdc257b396ab6343fa8f7"} pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 20 00:26:42 crc kubenswrapper[5119]: I0220 00:26:42.163912 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" podUID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerName="machine-config-daemon" containerID="cri-o://89838faa3e23ccc0655e0096613e091fc8decdd475bfdc257b396ab6343fa8f7" gracePeriod=600 Feb 20 00:26:42 crc kubenswrapper[5119]: I0220 00:26:42.664522 5119 generic.go:358] "Generic (PLEG): container finished" podID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerID="89838faa3e23ccc0655e0096613e091fc8decdd475bfdc257b396ab6343fa8f7" exitCode=0 Feb 20 00:26:42 crc kubenswrapper[5119]: I0220 00:26:42.664585 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" event={"ID":"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d","Type":"ContainerDied","Data":"89838faa3e23ccc0655e0096613e091fc8decdd475bfdc257b396ab6343fa8f7"} Feb 20 00:26:42 crc kubenswrapper[5119]: I0220 00:26:42.665059 5119 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" event={"ID":"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d","Type":"ContainerStarted","Data":"a542021eb09857f8ffb4ca1336b877f58653295632332fbc3653b725007eaa36"} Feb 20 00:26:42 crc kubenswrapper[5119]: I0220 00:26:42.665084 5119 scope.go:117] "RemoveContainer" containerID="6e692bfc0f8e3640cdfb629db9ce0f6fdd7db4e721f07aacfb3653d9f3057c7c" Feb 20 00:26:47 crc kubenswrapper[5119]: I0220 00:26:47.241890 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Feb 20 00:26:47 crc kubenswrapper[5119]: I0220 00:26:47.250516 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Feb 20 00:26:47 crc kubenswrapper[5119]: I0220 00:26:47.253276 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-selfsigned\"" Feb 20 00:26:47 crc kubenswrapper[5119]: I0220 00:26:47.253441 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"qdr-test-config\"" Feb 20 00:26:47 crc kubenswrapper[5119]: I0220 00:26:47.253795 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Feb 20 00:26:47 crc kubenswrapper[5119]: I0220 00:26:47.318688 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/bf83dad3-4d5a-43c8-a005-6b7952dc889e-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"bf83dad3-4d5a-43c8-a005-6b7952dc889e\") " pod="service-telemetry/qdr-test" Feb 20 00:26:47 crc kubenswrapper[5119]: I0220 00:26:47.319012 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9s2q\" (UniqueName: \"kubernetes.io/projected/bf83dad3-4d5a-43c8-a005-6b7952dc889e-kube-api-access-s9s2q\") pod \"qdr-test\" (UID: \"bf83dad3-4d5a-43c8-a005-6b7952dc889e\") " pod="service-telemetry/qdr-test" Feb 20 00:26:47 crc kubenswrapper[5119]: I0220 00:26:47.319059 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/bf83dad3-4d5a-43c8-a005-6b7952dc889e-qdr-test-config\") pod \"qdr-test\" (UID: \"bf83dad3-4d5a-43c8-a005-6b7952dc889e\") " pod="service-telemetry/qdr-test" Feb 20 00:26:47 crc kubenswrapper[5119]: I0220 00:26:47.420360 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s9s2q\" (UniqueName: \"kubernetes.io/projected/bf83dad3-4d5a-43c8-a005-6b7952dc889e-kube-api-access-s9s2q\") pod \"qdr-test\" (UID: \"bf83dad3-4d5a-43c8-a005-6b7952dc889e\") " pod="service-telemetry/qdr-test" Feb 20 00:26:47 crc kubenswrapper[5119]: I0220 00:26:47.420606 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/bf83dad3-4d5a-43c8-a005-6b7952dc889e-qdr-test-config\") pod \"qdr-test\" (UID: \"bf83dad3-4d5a-43c8-a005-6b7952dc889e\") " pod="service-telemetry/qdr-test" Feb 20 00:26:47 crc kubenswrapper[5119]: I0220 00:26:47.420845 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/bf83dad3-4d5a-43c8-a005-6b7952dc889e-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: 
\"bf83dad3-4d5a-43c8-a005-6b7952dc889e\") " pod="service-telemetry/qdr-test" Feb 20 00:26:47 crc kubenswrapper[5119]: I0220 00:26:47.421465 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/bf83dad3-4d5a-43c8-a005-6b7952dc889e-qdr-test-config\") pod \"qdr-test\" (UID: \"bf83dad3-4d5a-43c8-a005-6b7952dc889e\") " pod="service-telemetry/qdr-test" Feb 20 00:26:47 crc kubenswrapper[5119]: I0220 00:26:47.429488 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/bf83dad3-4d5a-43c8-a005-6b7952dc889e-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"bf83dad3-4d5a-43c8-a005-6b7952dc889e\") " pod="service-telemetry/qdr-test" Feb 20 00:26:47 crc kubenswrapper[5119]: I0220 00:26:47.440730 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9s2q\" (UniqueName: \"kubernetes.io/projected/bf83dad3-4d5a-43c8-a005-6b7952dc889e-kube-api-access-s9s2q\") pod \"qdr-test\" (UID: \"bf83dad3-4d5a-43c8-a005-6b7952dc889e\") " pod="service-telemetry/qdr-test" Feb 20 00:26:47 crc kubenswrapper[5119]: I0220 00:26:47.580107 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Feb 20 00:26:47 crc kubenswrapper[5119]: I0220 00:26:47.824057 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Feb 20 00:26:48 crc kubenswrapper[5119]: I0220 00:26:48.720167 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"bf83dad3-4d5a-43c8-a005-6b7952dc889e","Type":"ContainerStarted","Data":"4e0a4d0c36b24c2c31e20ccbae4753de41026abd9daac88399223da6b0d4caff"} Feb 20 00:26:49 crc kubenswrapper[5119]: I0220 00:26:49.856900 5119 scope.go:117] "RemoveContainer" containerID="467606546bc53d5e2e516bea5f62edf6a0d8e0121fc941ef4e0de1b48f01d063" Feb 20 00:26:51 crc kubenswrapper[5119]: I0220 00:26:51.857609 5119 scope.go:117] "RemoveContainer" containerID="7f01862852b0d83e61475084dea0f8d86a404fdc228d1f622bb7fd4ec83bdb91" Feb 20 00:26:51 crc kubenswrapper[5119]: I0220 00:26:51.859088 5119 scope.go:117] "RemoveContainer" containerID="896e2365f5cbba87e99b20a2fb29fb16d7f98135e1fbe053058d12b5cefb0410" Feb 20 00:26:52 crc kubenswrapper[5119]: I0220 00:26:52.870021 5119 scope.go:117] "RemoveContainer" containerID="b2adee4102be1bb1203d2ed2c675affc137e70c7d9c9b2ab8c32c6e596015547" Feb 20 00:26:54 crc kubenswrapper[5119]: I0220 00:26:54.867802 5119 scope.go:117] "RemoveContainer" containerID="9f5ba9231be4df910c23bcd3527be0c98de582396ea5db85a8d64cf399d3b3b1" Feb 20 00:26:55 crc kubenswrapper[5119]: I0220 00:26:55.771279 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr" event={"ID":"eca14d1c-8e86-45f7-9adc-9d54b6953e1e","Type":"ContainerStarted","Data":"b070dc44d24ae51699b8eda5669d9e06e6b0fb7a8b306e355a00cc7fad20ec36"} Feb 20 00:26:56 crc kubenswrapper[5119]: I0220 00:26:56.790601 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5" event={"ID":"b2966a3b-fe46-4c89-a42a-4efe55c81cf4","Type":"ContainerStarted","Data":"c1e04e9af504a57d1fcd8d1ef98a71302e0bc74d94f32c33a87789cf75da17fe"} Feb 20 00:26:56 crc kubenswrapper[5119]: I0220 00:26:56.794250 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7" event={"ID":"8d58f754-9504-4e35-95f5-0264ce16e97a","Type":"ContainerStarted","Data":"1688be1a4b6f22b46f027da5377896146888f3463f0cda1ca2d772af6c8604f8"} Feb 20 00:26:56 crc kubenswrapper[5119]: I0220 00:26:56.796106 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"bf83dad3-4d5a-43c8-a005-6b7952dc889e","Type":"ContainerStarted","Data":"b0a66c33d01816ef9b5a66e46211c7a858694002c50df4ce90c651ba9fc8aaca"} Feb 20 00:26:56 crc kubenswrapper[5119]: I0220 00:26:56.820052 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg" event={"ID":"329d86eb-9fa7-44d2-aab2-b126b2c74320","Type":"ContainerStarted","Data":"8914a7795ac60190af8ec7530cccf70e4a1c847fdc94f5999f86481c3de73055"} Feb 20 00:26:56 crc kubenswrapper[5119]: I0220 00:26:56.825361 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-gv8rg" event={"ID":"80a12472-2909-402e-8aed-167b8ddc8adf","Type":"ContainerStarted","Data":"fbc56df1d61a5c89ec2c487844bfe470f25d14e88676b5b12781537a3cc2c0d2"} Feb 20 00:26:56 crc kubenswrapper[5119]: I0220 00:26:56.843309 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=2.023933569 podStartE2EDuration="9.84329446s" podCreationTimestamp="2026-02-20 00:26:47 +0000 UTC" firstStartedPulling="2026-02-20 00:26:47.830138503 +0000 UTC m=+989.809102795" lastFinishedPulling="2026-02-20 00:26:55.649499394 +0000 UTC m=+997.628463686" observedRunningTime="2026-02-20 00:26:56.84033397 +0000 UTC m=+998.819298272" watchObservedRunningTime="2026-02-20 00:26:56.84329446 +0000 UTC m=+998.822258742" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.179089 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-x2c5z"] Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.225744 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-x2c5z"] Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.225769 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-x2c5z" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.228518 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-entrypoint-script\"" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.229065 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-publisher\"" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.229018 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-sensubility-config\"" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.229712 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-entrypoint-script\"" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.233688 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-config\"" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.241192 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-healthcheck-log\"" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.270143 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn6p2\" (UniqueName: \"kubernetes.io/projected/41f8f602-8f2b-4207-b74a-71069fc43a51-kube-api-access-zn6p2\") pod \"stf-smoketest-smoke1-x2c5z\" (UID: \"41f8f602-8f2b-4207-b74a-71069fc43a51\") " pod="service-telemetry/stf-smoketest-smoke1-x2c5z" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.270380 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-healthcheck-log\") pod \"stf-smoketest-smoke1-x2c5z\" (UID: \"41f8f602-8f2b-4207-b74a-71069fc43a51\") " pod="service-telemetry/stf-smoketest-smoke1-x2c5z" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.270509 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-x2c5z\" (UID: \"41f8f602-8f2b-4207-b74a-71069fc43a51\") " pod="service-telemetry/stf-smoketest-smoke1-x2c5z" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.270693 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-collectd-config\") pod \"stf-smoketest-smoke1-x2c5z\" (UID: \"41f8f602-8f2b-4207-b74a-71069fc43a51\") " pod="service-telemetry/stf-smoketest-smoke1-x2c5z" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.270883 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-ceilometer-publisher\") pod \"stf-smoketest-smoke1-x2c5z\" (UID: \"41f8f602-8f2b-4207-b74a-71069fc43a51\") " pod="service-telemetry/stf-smoketest-smoke1-x2c5z" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.271035 5119 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-sensubility-config\") pod \"stf-smoketest-smoke1-x2c5z\" (UID: \"41f8f602-8f2b-4207-b74a-71069fc43a51\") " pod="service-telemetry/stf-smoketest-smoke1-x2c5z" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.271191 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-x2c5z\" (UID: \"41f8f602-8f2b-4207-b74a-71069fc43a51\") " pod="service-telemetry/stf-smoketest-smoke1-x2c5z" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.372694 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zn6p2\" (UniqueName: \"kubernetes.io/projected/41f8f602-8f2b-4207-b74a-71069fc43a51-kube-api-access-zn6p2\") pod \"stf-smoketest-smoke1-x2c5z\" (UID: \"41f8f602-8f2b-4207-b74a-71069fc43a51\") " pod="service-telemetry/stf-smoketest-smoke1-x2c5z" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.373097 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-healthcheck-log\") pod \"stf-smoketest-smoke1-x2c5z\" (UID: \"41f8f602-8f2b-4207-b74a-71069fc43a51\") " pod="service-telemetry/stf-smoketest-smoke1-x2c5z" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.373254 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-x2c5z\" (UID: \"41f8f602-8f2b-4207-b74a-71069fc43a51\") " pod="service-telemetry/stf-smoketest-smoke1-x2c5z" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.374453 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-collectd-config\") pod \"stf-smoketest-smoke1-x2c5z\" (UID: \"41f8f602-8f2b-4207-b74a-71069fc43a51\") " pod="service-telemetry/stf-smoketest-smoke1-x2c5z" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.375409 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-ceilometer-publisher\") pod \"stf-smoketest-smoke1-x2c5z\" (UID: \"41f8f602-8f2b-4207-b74a-71069fc43a51\") " pod="service-telemetry/stf-smoketest-smoke1-x2c5z" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.376482 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-sensubility-config\") pod \"stf-smoketest-smoke1-x2c5z\" (UID: \"41f8f602-8f2b-4207-b74a-71069fc43a51\") " pod="service-telemetry/stf-smoketest-smoke1-x2c5z" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.376900 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-x2c5z\" (UID: \"41f8f602-8f2b-4207-b74a-71069fc43a51\") " 
pod="service-telemetry/stf-smoketest-smoke1-x2c5z" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.376417 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-ceilometer-publisher\") pod \"stf-smoketest-smoke1-x2c5z\" (UID: \"41f8f602-8f2b-4207-b74a-71069fc43a51\") " pod="service-telemetry/stf-smoketest-smoke1-x2c5z" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.375308 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-collectd-config\") pod \"stf-smoketest-smoke1-x2c5z\" (UID: \"41f8f602-8f2b-4207-b74a-71069fc43a51\") " pod="service-telemetry/stf-smoketest-smoke1-x2c5z" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.374361 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-x2c5z\" (UID: \"41f8f602-8f2b-4207-b74a-71069fc43a51\") " pod="service-telemetry/stf-smoketest-smoke1-x2c5z" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.377507 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-sensubility-config\") pod \"stf-smoketest-smoke1-x2c5z\" (UID: \"41f8f602-8f2b-4207-b74a-71069fc43a51\") " pod="service-telemetry/stf-smoketest-smoke1-x2c5z" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.378778 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-x2c5z\" (UID: \"41f8f602-8f2b-4207-b74a-71069fc43a51\") " pod="service-telemetry/stf-smoketest-smoke1-x2c5z" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.379070 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-healthcheck-log\") pod \"stf-smoketest-smoke1-x2c5z\" (UID: \"41f8f602-8f2b-4207-b74a-71069fc43a51\") " pod="service-telemetry/stf-smoketest-smoke1-x2c5z" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.404095 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn6p2\" (UniqueName: \"kubernetes.io/projected/41f8f602-8f2b-4207-b74a-71069fc43a51-kube-api-access-zn6p2\") pod \"stf-smoketest-smoke1-x2c5z\" (UID: \"41f8f602-8f2b-4207-b74a-71069fc43a51\") " pod="service-telemetry/stf-smoketest-smoke1-x2c5z" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.549018 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-x2c5z" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.635875 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.648138 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.648265 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.782513 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22tqz\" (UniqueName: \"kubernetes.io/projected/65e76b3f-470d-4aac-9270-87437a5619c9-kube-api-access-22tqz\") pod \"curl\" (UID: \"65e76b3f-470d-4aac-9270-87437a5619c9\") " pod="service-telemetry/curl" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.804290 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-x2c5z"] Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.835849 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-x2c5z" event={"ID":"41f8f602-8f2b-4207-b74a-71069fc43a51","Type":"ContainerStarted","Data":"cc7db0093c3802799ed4963d1e130b7baa9015ccfc4a2732c4ac1075604b0ae8"} Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.884674 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-22tqz\" (UniqueName: \"kubernetes.io/projected/65e76b3f-470d-4aac-9270-87437a5619c9-kube-api-access-22tqz\") pod \"curl\" (UID: \"65e76b3f-470d-4aac-9270-87437a5619c9\") " pod="service-telemetry/curl" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.902178 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-22tqz\" (UniqueName: \"kubernetes.io/projected/65e76b3f-470d-4aac-9270-87437a5619c9-kube-api-access-22tqz\") pod \"curl\" (UID: \"65e76b3f-470d-4aac-9270-87437a5619c9\") " pod="service-telemetry/curl" Feb 20 00:26:57 crc kubenswrapper[5119]: I0220 00:26:57.966553 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Feb 20 00:26:58 crc kubenswrapper[5119]: I0220 00:26:58.211355 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Feb 20 00:26:58 crc kubenswrapper[5119]: W0220 00:26:58.215323 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65e76b3f_470d_4aac_9270_87437a5619c9.slice/crio-8244d781882002a6079f2c9d37f3dd6e32a2c7b2ef7799711772ecca1d3d2cb8 WatchSource:0}: Error finding container 8244d781882002a6079f2c9d37f3dd6e32a2c7b2ef7799711772ecca1d3d2cb8: Status 404 returned error can't find the container with id 8244d781882002a6079f2c9d37f3dd6e32a2c7b2ef7799711772ecca1d3d2cb8 Feb 20 00:26:58 crc kubenswrapper[5119]: I0220 00:26:58.844337 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"65e76b3f-470d-4aac-9270-87437a5619c9","Type":"ContainerStarted","Data":"8244d781882002a6079f2c9d37f3dd6e32a2c7b2ef7799711772ecca1d3d2cb8"} Feb 20 00:27:00 crc kubenswrapper[5119]: I0220 00:27:00.866247 5119 generic.go:358] "Generic (PLEG): container finished" podID="65e76b3f-470d-4aac-9270-87437a5619c9" containerID="6cc8b8fba38d5d27d4e136c27dc38a0e691cb52e55c0f637f86bb5d08bb2a7bb" exitCode=0 Feb 20 00:27:00 crc kubenswrapper[5119]: I0220 00:27:00.866305 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"65e76b3f-470d-4aac-9270-87437a5619c9","Type":"ContainerDied","Data":"6cc8b8fba38d5d27d4e136c27dc38a0e691cb52e55c0f637f86bb5d08bb2a7bb"} Feb 20 00:27:06 crc kubenswrapper[5119]: I0220 00:27:06.979245 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Feb 20 00:27:07 crc kubenswrapper[5119]: I0220 00:27:07.104576 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22tqz\" (UniqueName: \"kubernetes.io/projected/65e76b3f-470d-4aac-9270-87437a5619c9-kube-api-access-22tqz\") pod \"65e76b3f-470d-4aac-9270-87437a5619c9\" (UID: \"65e76b3f-470d-4aac-9270-87437a5619c9\") " Feb 20 00:27:07 crc kubenswrapper[5119]: I0220 00:27:07.119939 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65e76b3f-470d-4aac-9270-87437a5619c9-kube-api-access-22tqz" (OuterVolumeSpecName: "kube-api-access-22tqz") pod "65e76b3f-470d-4aac-9270-87437a5619c9" (UID: "65e76b3f-470d-4aac-9270-87437a5619c9"). InnerVolumeSpecName "kube-api-access-22tqz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:27:07 crc kubenswrapper[5119]: I0220 00:27:07.135511 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_65e76b3f-470d-4aac-9270-87437a5619c9/curl/0.log" Feb 20 00:27:07 crc kubenswrapper[5119]: I0220 00:27:07.206311 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-22tqz\" (UniqueName: \"kubernetes.io/projected/65e76b3f-470d-4aac-9270-87437a5619c9-kube-api-access-22tqz\") on node \"crc\" DevicePath \"\"" Feb 20 00:27:07 crc kubenswrapper[5119]: I0220 00:27:07.410506 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-jc2zw_4e23a4f9-0ec1-4b2f-b886-0bced8fe7442/prometheus-webhook-snmp/0.log" Feb 20 00:27:07 crc kubenswrapper[5119]: I0220 00:27:07.922901 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Feb 20 00:27:07 crc kubenswrapper[5119]: I0220 00:27:07.923003 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"65e76b3f-470d-4aac-9270-87437a5619c9","Type":"ContainerDied","Data":"8244d781882002a6079f2c9d37f3dd6e32a2c7b2ef7799711772ecca1d3d2cb8"} Feb 20 00:27:07 crc kubenswrapper[5119]: I0220 00:27:07.923260 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8244d781882002a6079f2c9d37f3dd6e32a2c7b2ef7799711772ecca1d3d2cb8" Feb 20 00:27:08 crc kubenswrapper[5119]: I0220 00:27:08.932133 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-x2c5z" event={"ID":"41f8f602-8f2b-4207-b74a-71069fc43a51","Type":"ContainerStarted","Data":"9ab52925763964f11627a35d62c4035bc9efa35a6e6853a803d04766f079c458"} Feb 20 00:27:14 crc kubenswrapper[5119]: I0220 00:27:14.975019 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-x2c5z" event={"ID":"41f8f602-8f2b-4207-b74a-71069fc43a51","Type":"ContainerStarted","Data":"57c57658109dc482649776dfae093dd4c0a7ed06ee2dff7cac3547f6a1d0236a"} Feb 20 00:27:15 crc kubenswrapper[5119]: I0220 00:27:15.009662 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-x2c5z" podStartSLOduration=1.629514959 podStartE2EDuration="18.009644552s" podCreationTimestamp="2026-02-20 00:26:57 +0000 UTC" firstStartedPulling="2026-02-20 00:26:57.811565747 +0000 UTC m=+999.790530039" lastFinishedPulling="2026-02-20 00:27:14.19169532 +0000 UTC m=+1016.170659632" observedRunningTime="2026-02-20 00:27:15.00029239 +0000 UTC m=+1016.979256752" watchObservedRunningTime="2026-02-20 00:27:15.009644552 +0000 
UTC m=+1016.988608854" Feb 20 00:27:37 crc kubenswrapper[5119]: I0220 00:27:37.602971 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-jc2zw_4e23a4f9-0ec1-4b2f-b886-0bced8fe7442/prometheus-webhook-snmp/0.log" Feb 20 00:27:43 crc kubenswrapper[5119]: I0220 00:27:43.269087 5119 generic.go:358] "Generic (PLEG): container finished" podID="41f8f602-8f2b-4207-b74a-71069fc43a51" containerID="9ab52925763964f11627a35d62c4035bc9efa35a6e6853a803d04766f079c458" exitCode=0 Feb 20 00:27:43 crc kubenswrapper[5119]: I0220 00:27:43.269221 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-x2c5z" event={"ID":"41f8f602-8f2b-4207-b74a-71069fc43a51","Type":"ContainerDied","Data":"9ab52925763964f11627a35d62c4035bc9efa35a6e6853a803d04766f079c458"} Feb 20 00:27:43 crc kubenswrapper[5119]: I0220 00:27:43.270789 5119 scope.go:117] "RemoveContainer" containerID="9ab52925763964f11627a35d62c4035bc9efa35a6e6853a803d04766f079c458" Feb 20 00:27:47 crc kubenswrapper[5119]: I0220 00:27:47.313566 5119 generic.go:358] "Generic (PLEG): container finished" podID="41f8f602-8f2b-4207-b74a-71069fc43a51" containerID="57c57658109dc482649776dfae093dd4c0a7ed06ee2dff7cac3547f6a1d0236a" exitCode=0 Feb 20 00:27:47 crc kubenswrapper[5119]: I0220 00:27:47.313663 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-x2c5z" event={"ID":"41f8f602-8f2b-4207-b74a-71069fc43a51","Type":"ContainerDied","Data":"57c57658109dc482649776dfae093dd4c0a7ed06ee2dff7cac3547f6a1d0236a"} Feb 20 00:27:48 crc kubenswrapper[5119]: I0220 00:27:48.714718 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-x2c5z" Feb 20 00:27:48 crc kubenswrapper[5119]: I0220 00:27:48.835093 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn6p2\" (UniqueName: \"kubernetes.io/projected/41f8f602-8f2b-4207-b74a-71069fc43a51-kube-api-access-zn6p2\") pod \"41f8f602-8f2b-4207-b74a-71069fc43a51\" (UID: \"41f8f602-8f2b-4207-b74a-71069fc43a51\") " Feb 20 00:27:48 crc kubenswrapper[5119]: I0220 00:27:48.835141 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-ceilometer-entrypoint-script\") pod \"41f8f602-8f2b-4207-b74a-71069fc43a51\" (UID: \"41f8f602-8f2b-4207-b74a-71069fc43a51\") " Feb 20 00:27:48 crc kubenswrapper[5119]: I0220 00:27:48.835174 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-collectd-entrypoint-script\") pod \"41f8f602-8f2b-4207-b74a-71069fc43a51\" (UID: \"41f8f602-8f2b-4207-b74a-71069fc43a51\") " Feb 20 00:27:48 crc kubenswrapper[5119]: I0220 00:27:48.835198 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-sensubility-config\") pod \"41f8f602-8f2b-4207-b74a-71069fc43a51\" (UID: \"41f8f602-8f2b-4207-b74a-71069fc43a51\") " Feb 20 00:27:48 crc kubenswrapper[5119]: I0220 00:27:48.835222 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: 
\"kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-ceilometer-publisher\") pod \"41f8f602-8f2b-4207-b74a-71069fc43a51\" (UID: \"41f8f602-8f2b-4207-b74a-71069fc43a51\") " Feb 20 00:27:48 crc kubenswrapper[5119]: I0220 00:27:48.835320 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-collectd-config\") pod \"41f8f602-8f2b-4207-b74a-71069fc43a51\" (UID: \"41f8f602-8f2b-4207-b74a-71069fc43a51\") " Feb 20 00:27:48 crc kubenswrapper[5119]: I0220 00:27:48.835394 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-healthcheck-log\") pod \"41f8f602-8f2b-4207-b74a-71069fc43a51\" (UID: \"41f8f602-8f2b-4207-b74a-71069fc43a51\") " Feb 20 00:27:48 crc kubenswrapper[5119]: I0220 00:27:48.844274 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41f8f602-8f2b-4207-b74a-71069fc43a51-kube-api-access-zn6p2" (OuterVolumeSpecName: "kube-api-access-zn6p2") pod "41f8f602-8f2b-4207-b74a-71069fc43a51" (UID: "41f8f602-8f2b-4207-b74a-71069fc43a51"). InnerVolumeSpecName "kube-api-access-zn6p2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:27:48 crc kubenswrapper[5119]: I0220 00:27:48.856848 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "41f8f602-8f2b-4207-b74a-71069fc43a51" (UID: "41f8f602-8f2b-4207-b74a-71069fc43a51"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:27:48 crc kubenswrapper[5119]: I0220 00:27:48.859845 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "41f8f602-8f2b-4207-b74a-71069fc43a51" (UID: "41f8f602-8f2b-4207-b74a-71069fc43a51"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:27:48 crc kubenswrapper[5119]: I0220 00:27:48.859923 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "41f8f602-8f2b-4207-b74a-71069fc43a51" (UID: "41f8f602-8f2b-4207-b74a-71069fc43a51"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:27:48 crc kubenswrapper[5119]: I0220 00:27:48.886703 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "41f8f602-8f2b-4207-b74a-71069fc43a51" (UID: "41f8f602-8f2b-4207-b74a-71069fc43a51"). InnerVolumeSpecName "collectd-entrypoint-script". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:27:48 crc kubenswrapper[5119]: I0220 00:27:48.888234 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "41f8f602-8f2b-4207-b74a-71069fc43a51" (UID: "41f8f602-8f2b-4207-b74a-71069fc43a51"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:27:48 crc kubenswrapper[5119]: I0220 00:27:48.891943 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "41f8f602-8f2b-4207-b74a-71069fc43a51" (UID: "41f8f602-8f2b-4207-b74a-71069fc43a51"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:27:48 crc kubenswrapper[5119]: I0220 00:27:48.938146 5119 reconciler_common.go:299] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-collectd-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:27:48 crc kubenswrapper[5119]: I0220 00:27:48.938191 5119 reconciler_common.go:299] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-healthcheck-log\") on node \"crc\" DevicePath \"\"" Feb 20 00:27:48 crc kubenswrapper[5119]: I0220 00:27:48.938249 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zn6p2\" (UniqueName: \"kubernetes.io/projected/41f8f602-8f2b-4207-b74a-71069fc43a51-kube-api-access-zn6p2\") on node \"crc\" DevicePath \"\"" Feb 20 00:27:48 crc kubenswrapper[5119]: I0220 00:27:48.938273 5119 reconciler_common.go:299] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Feb 20 00:27:48 crc kubenswrapper[5119]: I0220 00:27:48.938291 5119 reconciler_common.go:299] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Feb 20 00:27:48 crc kubenswrapper[5119]: I0220 00:27:48.938308 5119 reconciler_common.go:299] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-sensubility-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:27:48 crc kubenswrapper[5119]: I0220 00:27:48.938325 5119 reconciler_common.go:299] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/41f8f602-8f2b-4207-b74a-71069fc43a51-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Feb 20 00:27:49 crc kubenswrapper[5119]: I0220 00:27:49.338509 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-x2c5z" Feb 20 00:27:49 crc kubenswrapper[5119]: I0220 00:27:49.338516 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-x2c5z" event={"ID":"41f8f602-8f2b-4207-b74a-71069fc43a51","Type":"ContainerDied","Data":"cc7db0093c3802799ed4963d1e130b7baa9015ccfc4a2732c4ac1075604b0ae8"} Feb 20 00:27:49 crc kubenswrapper[5119]: I0220 00:27:49.338579 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc7db0093c3802799ed4963d1e130b7baa9015ccfc4a2732c4ac1075604b0ae8" Feb 20 00:27:50 crc kubenswrapper[5119]: I0220 00:27:50.830450 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-x2c5z_41f8f602-8f2b-4207-b74a-71069fc43a51/smoketest-collectd/0.log" Feb 20 00:27:51 crc kubenswrapper[5119]: I0220 00:27:51.138771 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-x2c5z_41f8f602-8f2b-4207-b74a-71069fc43a51/smoketest-ceilometer/0.log" Feb 20 00:27:51 crc kubenswrapper[5119]: I0220 00:27:51.443256 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-55bf8d5cb-l4jtl_534a6547-d0c6-4c15-983a-3adf0be07a15/default-interconnect/0.log" Feb 20 00:27:51 crc kubenswrapper[5119]: I0220 00:27:51.719130 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-787645d794-gv8rg_80a12472-2909-402e-8aed-167b8ddc8adf/bridge/2.log" Feb 20 00:27:51 crc kubenswrapper[5119]: I0220 00:27:51.991578 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-787645d794-gv8rg_80a12472-2909-402e-8aed-167b8ddc8adf/sg-core/0.log" Feb 20 00:27:52 crc kubenswrapper[5119]: I0220 00:27:52.232893 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7_8d58f754-9504-4e35-95f5-0264ce16e97a/bridge/2.log" Feb 20 00:27:52 crc kubenswrapper[5119]: I0220 00:27:52.505644 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-58689f4f5c-xv6r7_8d58f754-9504-4e35-95f5-0264ce16e97a/sg-core/0.log" Feb 20 00:27:52 crc kubenswrapper[5119]: I0220 00:27:52.763423 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5_b2966a3b-fe46-4c89-a42a-4efe55c81cf4/bridge/2.log" Feb 20 00:27:53 crc kubenswrapper[5119]: I0220 00:27:53.072477 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-545b564d9f-vwrz5_b2966a3b-fe46-4c89-a42a-4efe55c81cf4/sg-core/0.log" Feb 20 00:27:53 crc kubenswrapper[5119]: I0220 00:27:53.376180 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr_eca14d1c-8e86-45f7-9adc-9d54b6953e1e/bridge/2.log" Feb 20 00:27:53 crc kubenswrapper[5119]: I0220 00:27:53.669852 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-6d587b6dc7-tj9mr_eca14d1c-8e86-45f7-9adc-9d54b6953e1e/sg-core/0.log" Feb 20 00:27:53 crc kubenswrapper[5119]: I0220 00:27:53.956886 5119 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg_329d86eb-9fa7-44d2-aab2-b126b2c74320/bridge/2.log" Feb 20 00:27:54 crc kubenswrapper[5119]: I0220 00:27:54.302478 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zb9gg_329d86eb-9fa7-44d2-aab2-b126b2c74320/sg-core/0.log" Feb 20 00:27:58 crc kubenswrapper[5119]: I0220 00:27:58.041025 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-97b85656c-zjbls_eadb8db0-384b-4ee4-9857-ebd4c7e7600b/operator/0.log" Feb 20 00:27:58 crc kubenswrapper[5119]: I0220 00:27:58.465004 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_0202d41c-dd80-4473-9175-855d12a13230/prometheus/0.log" Feb 20 00:27:58 crc kubenswrapper[5119]: I0220 00:27:58.752161 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_171ea291-6d79-46a5-aa1a-02eb579d0774/elasticsearch/0.log" Feb 20 00:27:59 crc kubenswrapper[5119]: I0220 00:27:59.018433 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-jc2zw_4e23a4f9-0ec1-4b2f-b886-0bced8fe7442/prometheus-webhook-snmp/0.log" Feb 20 00:27:59 crc kubenswrapper[5119]: I0220 00:27:59.319437 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_96709d8a-7441-425a-b115-9d8d56a0c603/alertmanager/0.log" Feb 20 00:28:00 crc kubenswrapper[5119]: I0220 00:28:00.158674 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29525788-6x4mn"] Feb 20 00:28:00 crc kubenswrapper[5119]: I0220 00:28:00.162292 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="41f8f602-8f2b-4207-b74a-71069fc43a51" containerName="smoketest-ceilometer" Feb 20 00:28:00 crc kubenswrapper[5119]: I0220 00:28:00.162337 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f8f602-8f2b-4207-b74a-71069fc43a51" containerName="smoketest-ceilometer" Feb 20 00:28:00 crc kubenswrapper[5119]: I0220 00:28:00.162392 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="65e76b3f-470d-4aac-9270-87437a5619c9" containerName="curl" Feb 20 00:28:00 crc kubenswrapper[5119]: I0220 00:28:00.162409 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="65e76b3f-470d-4aac-9270-87437a5619c9" containerName="curl" Feb 20 00:28:00 crc kubenswrapper[5119]: I0220 00:28:00.162476 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="41f8f602-8f2b-4207-b74a-71069fc43a51" containerName="smoketest-collectd" Feb 20 00:28:00 crc kubenswrapper[5119]: I0220 00:28:00.162493 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f8f602-8f2b-4207-b74a-71069fc43a51" containerName="smoketest-collectd" Feb 20 00:28:00 crc kubenswrapper[5119]: I0220 00:28:00.162790 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="65e76b3f-470d-4aac-9270-87437a5619c9" containerName="curl" Feb 20 00:28:00 crc kubenswrapper[5119]: I0220 00:28:00.162831 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="41f8f602-8f2b-4207-b74a-71069fc43a51" containerName="smoketest-collectd" Feb 20 00:28:00 crc kubenswrapper[5119]: I0220 00:28:00.162867 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="41f8f602-8f2b-4207-b74a-71069fc43a51" 
containerName="smoketest-ceilometer" Feb 20 00:28:00 crc kubenswrapper[5119]: I0220 00:28:00.173446 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525788-6x4mn" Feb 20 00:28:00 crc kubenswrapper[5119]: I0220 00:28:00.177641 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 20 00:28:00 crc kubenswrapper[5119]: I0220 00:28:00.177921 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 20 00:28:00 crc kubenswrapper[5119]: I0220 00:28:00.178371 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-nmc85\"" Feb 20 00:28:00 crc kubenswrapper[5119]: I0220 00:28:00.179015 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525788-6x4mn"] Feb 20 00:28:00 crc kubenswrapper[5119]: I0220 00:28:00.208452 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd9b6\" (UniqueName: \"kubernetes.io/projected/68f070b6-1c0e-49b5-bbc8-938761ce678c-kube-api-access-dd9b6\") pod \"auto-csr-approver-29525788-6x4mn\" (UID: \"68f070b6-1c0e-49b5-bbc8-938761ce678c\") " pod="openshift-infra/auto-csr-approver-29525788-6x4mn" Feb 20 00:28:00 crc kubenswrapper[5119]: I0220 00:28:00.311308 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dd9b6\" (UniqueName: \"kubernetes.io/projected/68f070b6-1c0e-49b5-bbc8-938761ce678c-kube-api-access-dd9b6\") pod \"auto-csr-approver-29525788-6x4mn\" (UID: \"68f070b6-1c0e-49b5-bbc8-938761ce678c\") " pod="openshift-infra/auto-csr-approver-29525788-6x4mn" Feb 20 00:28:00 crc kubenswrapper[5119]: I0220 00:28:00.349478 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd9b6\" (UniqueName: \"kubernetes.io/projected/68f070b6-1c0e-49b5-bbc8-938761ce678c-kube-api-access-dd9b6\") pod \"auto-csr-approver-29525788-6x4mn\" (UID: \"68f070b6-1c0e-49b5-bbc8-938761ce678c\") " pod="openshift-infra/auto-csr-approver-29525788-6x4mn" Feb 20 00:28:00 crc kubenswrapper[5119]: I0220 00:28:00.525952 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525788-6x4mn" Feb 20 00:28:00 crc kubenswrapper[5119]: I0220 00:28:00.764590 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525788-6x4mn"] Feb 20 00:28:01 crc kubenswrapper[5119]: I0220 00:28:01.458336 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525788-6x4mn" event={"ID":"68f070b6-1c0e-49b5-bbc8-938761ce678c","Type":"ContainerStarted","Data":"2ec26995584184998b2416815eb86ae7bdc6452fb0a1b8431e9f56eab74c795a"} Feb 20 00:28:02 crc kubenswrapper[5119]: I0220 00:28:02.478833 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525788-6x4mn" event={"ID":"68f070b6-1c0e-49b5-bbc8-938761ce678c","Type":"ContainerStarted","Data":"ba816cc0a1dfa0ef7f3c1b60e1cab343709a80678747b67a264096ce725c7020"} Feb 20 00:28:03 crc kubenswrapper[5119]: I0220 00:28:03.487383 5119 generic.go:358] "Generic (PLEG): container finished" podID="68f070b6-1c0e-49b5-bbc8-938761ce678c" containerID="ba816cc0a1dfa0ef7f3c1b60e1cab343709a80678747b67a264096ce725c7020" exitCode=0 Feb 20 00:28:03 crc kubenswrapper[5119]: I0220 00:28:03.487471 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525788-6x4mn" event={"ID":"68f070b6-1c0e-49b5-bbc8-938761ce678c","Type":"ContainerDied","Data":"ba816cc0a1dfa0ef7f3c1b60e1cab343709a80678747b67a264096ce725c7020"} Feb 20 00:28:04 crc kubenswrapper[5119]: I0220 00:28:04.866468 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525788-6x4mn" Feb 20 00:28:04 crc kubenswrapper[5119]: I0220 00:28:04.989885 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd9b6\" (UniqueName: \"kubernetes.io/projected/68f070b6-1c0e-49b5-bbc8-938761ce678c-kube-api-access-dd9b6\") pod \"68f070b6-1c0e-49b5-bbc8-938761ce678c\" (UID: \"68f070b6-1c0e-49b5-bbc8-938761ce678c\") " Feb 20 00:28:05 crc kubenswrapper[5119]: I0220 00:28:04.999275 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68f070b6-1c0e-49b5-bbc8-938761ce678c-kube-api-access-dd9b6" (OuterVolumeSpecName: "kube-api-access-dd9b6") pod "68f070b6-1c0e-49b5-bbc8-938761ce678c" (UID: "68f070b6-1c0e-49b5-bbc8-938761ce678c"). InnerVolumeSpecName "kube-api-access-dd9b6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:28:05 crc kubenswrapper[5119]: I0220 00:28:05.091941 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dd9b6\" (UniqueName: \"kubernetes.io/projected/68f070b6-1c0e-49b5-bbc8-938761ce678c-kube-api-access-dd9b6\") on node \"crc\" DevicePath \"\"" Feb 20 00:28:05 crc kubenswrapper[5119]: I0220 00:28:05.509518 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525788-6x4mn" Feb 20 00:28:05 crc kubenswrapper[5119]: I0220 00:28:05.509594 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525788-6x4mn" event={"ID":"68f070b6-1c0e-49b5-bbc8-938761ce678c","Type":"ContainerDied","Data":"2ec26995584184998b2416815eb86ae7bdc6452fb0a1b8431e9f56eab74c795a"} Feb 20 00:28:05 crc kubenswrapper[5119]: I0220 00:28:05.510152 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ec26995584184998b2416815eb86ae7bdc6452fb0a1b8431e9f56eab74c795a" Feb 20 00:28:05 crc kubenswrapper[5119]: I0220 00:28:05.583757 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29525782-7hlsv"] Feb 20 00:28:05 crc kubenswrapper[5119]: I0220 00:28:05.595240 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29525782-7hlsv"] Feb 20 00:28:06 crc kubenswrapper[5119]: I0220 00:28:06.874256 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12d846a4-18b9-4b41-b783-4f7282c82065" path="/var/lib/kubelet/pods/12d846a4-18b9-4b41-b783-4f7282c82065/volumes" Feb 20 00:28:13 crc kubenswrapper[5119]: I0220 00:28:13.061146 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-794b5697c7-m88kv_206e4ff6-6191-4c11-94cf-765aa3158c2d/operator/0.log" Feb 20 00:28:16 crc kubenswrapper[5119]: I0220 00:28:16.166274 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-97b85656c-zjbls_eadb8db0-384b-4ee4-9857-ebd4c7e7600b/operator/0.log" Feb 20 00:28:16 crc kubenswrapper[5119]: I0220 00:28:16.440139 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_bf83dad3-4d5a-43c8-a005-6b7952dc889e/qdr/0.log" Feb 20 00:28:35 crc kubenswrapper[5119]: I0220 00:28:35.279818 5119 scope.go:117] "RemoveContainer" containerID="15ae00ee047d1d2da4daa48c50d0f04db0402b5947e6228eb79aa74c73f808bb" Feb 20 00:28:42 crc kubenswrapper[5119]: I0220 00:28:42.161055 5119 patch_prober.go:28] interesting pod/machine-config-daemon-l7jjp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:28:42 crc kubenswrapper[5119]: I0220 00:28:42.161474 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" podUID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:28:52 crc kubenswrapper[5119]: I0220 00:28:52.086154 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z9sn9/must-gather-2bn94"] Feb 20 00:28:52 crc kubenswrapper[5119]: I0220 00:28:52.088831 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="68f070b6-1c0e-49b5-bbc8-938761ce678c" containerName="oc" Feb 20 00:28:52 crc kubenswrapper[5119]: I0220 00:28:52.088881 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="68f070b6-1c0e-49b5-bbc8-938761ce678c" containerName="oc" Feb 20 00:28:52 crc kubenswrapper[5119]: I0220 00:28:52.089338 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="68f070b6-1c0e-49b5-bbc8-938761ce678c" 
containerName="oc" Feb 20 00:28:52 crc kubenswrapper[5119]: I0220 00:28:52.093370 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z9sn9/must-gather-2bn94" Feb 20 00:28:52 crc kubenswrapper[5119]: I0220 00:28:52.095261 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-z9sn9\"/\"default-dockercfg-5l5wr\"" Feb 20 00:28:52 crc kubenswrapper[5119]: I0220 00:28:52.095258 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-z9sn9\"/\"openshift-service-ca.crt\"" Feb 20 00:28:52 crc kubenswrapper[5119]: I0220 00:28:52.095888 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-z9sn9\"/\"kube-root-ca.crt\"" Feb 20 00:28:52 crc kubenswrapper[5119]: I0220 00:28:52.100655 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-z9sn9/must-gather-2bn94"] Feb 20 00:28:52 crc kubenswrapper[5119]: I0220 00:28:52.137457 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2331f12a-a661-4605-9947-fef583189905-must-gather-output\") pod \"must-gather-2bn94\" (UID: \"2331f12a-a661-4605-9947-fef583189905\") " pod="openshift-must-gather-z9sn9/must-gather-2bn94" Feb 20 00:28:52 crc kubenswrapper[5119]: I0220 00:28:52.137808 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfc5s\" (UniqueName: \"kubernetes.io/projected/2331f12a-a661-4605-9947-fef583189905-kube-api-access-mfc5s\") pod \"must-gather-2bn94\" (UID: \"2331f12a-a661-4605-9947-fef583189905\") " pod="openshift-must-gather-z9sn9/must-gather-2bn94" Feb 20 00:28:52 crc kubenswrapper[5119]: I0220 00:28:52.239508 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mfc5s\" (UniqueName: \"kubernetes.io/projected/2331f12a-a661-4605-9947-fef583189905-kube-api-access-mfc5s\") pod \"must-gather-2bn94\" (UID: \"2331f12a-a661-4605-9947-fef583189905\") " pod="openshift-must-gather-z9sn9/must-gather-2bn94" Feb 20 00:28:52 crc kubenswrapper[5119]: I0220 00:28:52.239670 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2331f12a-a661-4605-9947-fef583189905-must-gather-output\") pod \"must-gather-2bn94\" (UID: \"2331f12a-a661-4605-9947-fef583189905\") " pod="openshift-must-gather-z9sn9/must-gather-2bn94" Feb 20 00:28:52 crc kubenswrapper[5119]: I0220 00:28:52.240158 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2331f12a-a661-4605-9947-fef583189905-must-gather-output\") pod \"must-gather-2bn94\" (UID: \"2331f12a-a661-4605-9947-fef583189905\") " pod="openshift-must-gather-z9sn9/must-gather-2bn94" Feb 20 00:28:52 crc kubenswrapper[5119]: I0220 00:28:52.261525 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfc5s\" (UniqueName: \"kubernetes.io/projected/2331f12a-a661-4605-9947-fef583189905-kube-api-access-mfc5s\") pod \"must-gather-2bn94\" (UID: \"2331f12a-a661-4605-9947-fef583189905\") " pod="openshift-must-gather-z9sn9/must-gather-2bn94" Feb 20 00:28:52 crc kubenswrapper[5119]: I0220 00:28:52.414393 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z9sn9/must-gather-2bn94" Feb 20 00:28:52 crc kubenswrapper[5119]: I0220 00:28:52.678867 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-z9sn9/must-gather-2bn94"] Feb 20 00:28:52 crc kubenswrapper[5119]: W0220 00:28:52.685893 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2331f12a_a661_4605_9947_fef583189905.slice/crio-fd3ae2623f876b82465912f01063a1bec02c61ed6e0fb810e6ccbbc2ec0cb18d WatchSource:0}: Error finding container fd3ae2623f876b82465912f01063a1bec02c61ed6e0fb810e6ccbbc2ec0cb18d: Status 404 returned error can't find the container with id fd3ae2623f876b82465912f01063a1bec02c61ed6e0fb810e6ccbbc2ec0cb18d Feb 20 00:28:52 crc kubenswrapper[5119]: I0220 00:28:52.974612 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9sn9/must-gather-2bn94" event={"ID":"2331f12a-a661-4605-9947-fef583189905","Type":"ContainerStarted","Data":"fd3ae2623f876b82465912f01063a1bec02c61ed6e0fb810e6ccbbc2ec0cb18d"} Feb 20 00:28:59 crc kubenswrapper[5119]: I0220 00:28:59.024107 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9sn9/must-gather-2bn94" event={"ID":"2331f12a-a661-4605-9947-fef583189905","Type":"ContainerStarted","Data":"96051c21d356b7382d7f8d64604d8cee8413812c8e531980eac58dc0d94b7c15"} Feb 20 00:28:59 crc kubenswrapper[5119]: I0220 00:28:59.024727 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9sn9/must-gather-2bn94" event={"ID":"2331f12a-a661-4605-9947-fef583189905","Type":"ContainerStarted","Data":"cb397ad4d2c49c2160da85516c91c9780220f3e6ec1fab1a3704e039469f323b"} Feb 20 00:29:12 crc kubenswrapper[5119]: I0220 00:29:12.160519 5119 patch_prober.go:28] interesting pod/machine-config-daemon-l7jjp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:29:12 crc kubenswrapper[5119]: I0220 00:29:12.160911 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" podUID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:29:42 crc kubenswrapper[5119]: I0220 00:29:42.161313 5119 patch_prober.go:28] interesting pod/machine-config-daemon-l7jjp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:29:42 crc kubenswrapper[5119]: I0220 00:29:42.161928 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" podUID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:29:42 crc kubenswrapper[5119]: I0220 00:29:42.161981 5119 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" Feb 20 00:29:42 crc kubenswrapper[5119]: I0220 00:29:42.162637 5119 kuberuntime_manager.go:1107] 
"Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a542021eb09857f8ffb4ca1336b877f58653295632332fbc3653b725007eaa36"} pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 20 00:29:42 crc kubenswrapper[5119]: I0220 00:29:42.162709 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" podUID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerName="machine-config-daemon" containerID="cri-o://a542021eb09857f8ffb4ca1336b877f58653295632332fbc3653b725007eaa36" gracePeriod=600 Feb 20 00:29:42 crc kubenswrapper[5119]: I0220 00:29:42.468068 5119 generic.go:358] "Generic (PLEG): container finished" podID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerID="a542021eb09857f8ffb4ca1336b877f58653295632332fbc3653b725007eaa36" exitCode=0 Feb 20 00:29:42 crc kubenswrapper[5119]: I0220 00:29:42.468319 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" event={"ID":"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d","Type":"ContainerDied","Data":"a542021eb09857f8ffb4ca1336b877f58653295632332fbc3653b725007eaa36"} Feb 20 00:29:42 crc kubenswrapper[5119]: I0220 00:29:42.468755 5119 scope.go:117] "RemoveContainer" containerID="89838faa3e23ccc0655e0096613e091fc8decdd475bfdc257b396ab6343fa8f7" Feb 20 00:29:43 crc kubenswrapper[5119]: I0220 00:29:43.477072 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" event={"ID":"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d","Type":"ContainerStarted","Data":"eee996522fc4e847dec015a02c0a8dc42ceadd46ce75e3827be349ce7fa527e3"} Feb 20 00:29:43 crc kubenswrapper[5119]: I0220 00:29:43.502879 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-z9sn9/must-gather-2bn94" podStartSLOduration=46.177324389 podStartE2EDuration="51.502860815s" podCreationTimestamp="2026-02-20 00:28:52 +0000 UTC" firstStartedPulling="2026-02-20 00:28:52.689929612 +0000 UTC m=+1114.668893904" lastFinishedPulling="2026-02-20 00:28:58.015466038 +0000 UTC m=+1119.994430330" observedRunningTime="2026-02-20 00:28:59.045275119 +0000 UTC m=+1121.024239441" watchObservedRunningTime="2026-02-20 00:29:43.502860815 +0000 UTC m=+1165.481825117" Feb 20 00:29:48 crc kubenswrapper[5119]: I0220 00:29:48.262761 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-k6wjq_54acf8be-ab9f-4e85-8394-dfafbf121b67/control-plane-machine-set-operator/0.log" Feb 20 00:29:48 crc kubenswrapper[5119]: I0220 00:29:48.334831 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-psrg4_e309235a-31b7-4789-9ba3-5839cab177a6/kube-rbac-proxy/0.log" Feb 20 00:29:48 crc kubenswrapper[5119]: I0220 00:29:48.416195 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-psrg4_e309235a-31b7-4789-9ba3-5839cab177a6/machine-api-operator/0.log" Feb 20 00:30:00 crc kubenswrapper[5119]: I0220 00:30:00.150957 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29525790-pl29g"] Feb 20 00:30:00 crc kubenswrapper[5119]: I0220 00:30:00.211736 5119 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29525790-z99ww"] Feb 20 00:30:00 crc kubenswrapper[5119]: I0220 00:30:00.213376 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525790-pl29g" Feb 20 00:30:00 crc kubenswrapper[5119]: I0220 00:30:00.215857 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 20 00:30:00 crc kubenswrapper[5119]: I0220 00:30:00.216090 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-nmc85\"" Feb 20 00:30:00 crc kubenswrapper[5119]: I0220 00:30:00.216095 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 20 00:30:00 crc kubenswrapper[5119]: I0220 00:30:00.218584 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525790-pl29g"] Feb 20 00:30:00 crc kubenswrapper[5119]: I0220 00:30:00.218778 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-z99ww" Feb 20 00:30:00 crc kubenswrapper[5119]: I0220 00:30:00.220328 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Feb 20 00:30:00 crc kubenswrapper[5119]: I0220 00:30:00.220619 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Feb 20 00:30:00 crc kubenswrapper[5119]: I0220 00:30:00.224189 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525790-z99ww"] Feb 20 00:30:00 crc kubenswrapper[5119]: I0220 00:30:00.360992 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c555fade-0657-492d-b1d0-a7fe3b1db9cc-config-volume\") pod \"collect-profiles-29525790-z99ww\" (UID: \"c555fade-0657-492d-b1d0-a7fe3b1db9cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-z99ww" Feb 20 00:30:00 crc kubenswrapper[5119]: I0220 00:30:00.361390 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgnvl\" (UniqueName: \"kubernetes.io/projected/c555fade-0657-492d-b1d0-a7fe3b1db9cc-kube-api-access-zgnvl\") pod \"collect-profiles-29525790-z99ww\" (UID: \"c555fade-0657-492d-b1d0-a7fe3b1db9cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-z99ww" Feb 20 00:30:00 crc kubenswrapper[5119]: I0220 00:30:00.361682 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c555fade-0657-492d-b1d0-a7fe3b1db9cc-secret-volume\") pod \"collect-profiles-29525790-z99ww\" (UID: \"c555fade-0657-492d-b1d0-a7fe3b1db9cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-z99ww" Feb 20 00:30:00 crc kubenswrapper[5119]: I0220 00:30:00.361879 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp8ll\" (UniqueName: \"kubernetes.io/projected/805d70ec-ae2b-4bc3-a83c-d4d1c478516c-kube-api-access-mp8ll\") pod \"auto-csr-approver-29525790-pl29g\" (UID: 
\"805d70ec-ae2b-4bc3-a83c-d4d1c478516c\") " pod="openshift-infra/auto-csr-approver-29525790-pl29g" Feb 20 00:30:00 crc kubenswrapper[5119]: I0220 00:30:00.463787 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c555fade-0657-492d-b1d0-a7fe3b1db9cc-config-volume\") pod \"collect-profiles-29525790-z99ww\" (UID: \"c555fade-0657-492d-b1d0-a7fe3b1db9cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-z99ww" Feb 20 00:30:00 crc kubenswrapper[5119]: I0220 00:30:00.463870 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zgnvl\" (UniqueName: \"kubernetes.io/projected/c555fade-0657-492d-b1d0-a7fe3b1db9cc-kube-api-access-zgnvl\") pod \"collect-profiles-29525790-z99ww\" (UID: \"c555fade-0657-492d-b1d0-a7fe3b1db9cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-z99ww" Feb 20 00:30:00 crc kubenswrapper[5119]: I0220 00:30:00.463987 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c555fade-0657-492d-b1d0-a7fe3b1db9cc-secret-volume\") pod \"collect-profiles-29525790-z99ww\" (UID: \"c555fade-0657-492d-b1d0-a7fe3b1db9cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-z99ww" Feb 20 00:30:00 crc kubenswrapper[5119]: I0220 00:30:00.464023 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mp8ll\" (UniqueName: \"kubernetes.io/projected/805d70ec-ae2b-4bc3-a83c-d4d1c478516c-kube-api-access-mp8ll\") pod \"auto-csr-approver-29525790-pl29g\" (UID: \"805d70ec-ae2b-4bc3-a83c-d4d1c478516c\") " pod="openshift-infra/auto-csr-approver-29525790-pl29g" Feb 20 00:30:00 crc kubenswrapper[5119]: I0220 00:30:00.465139 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c555fade-0657-492d-b1d0-a7fe3b1db9cc-config-volume\") pod \"collect-profiles-29525790-z99ww\" (UID: \"c555fade-0657-492d-b1d0-a7fe3b1db9cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-z99ww" Feb 20 00:30:00 crc kubenswrapper[5119]: I0220 00:30:00.473854 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c555fade-0657-492d-b1d0-a7fe3b1db9cc-secret-volume\") pod \"collect-profiles-29525790-z99ww\" (UID: \"c555fade-0657-492d-b1d0-a7fe3b1db9cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-z99ww" Feb 20 00:30:00 crc kubenswrapper[5119]: I0220 00:30:00.481687 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp8ll\" (UniqueName: \"kubernetes.io/projected/805d70ec-ae2b-4bc3-a83c-d4d1c478516c-kube-api-access-mp8ll\") pod \"auto-csr-approver-29525790-pl29g\" (UID: \"805d70ec-ae2b-4bc3-a83c-d4d1c478516c\") " pod="openshift-infra/auto-csr-approver-29525790-pl29g" Feb 20 00:30:00 crc kubenswrapper[5119]: I0220 00:30:00.498356 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgnvl\" (UniqueName: \"kubernetes.io/projected/c555fade-0657-492d-b1d0-a7fe3b1db9cc-kube-api-access-zgnvl\") pod \"collect-profiles-29525790-z99ww\" (UID: \"c555fade-0657-492d-b1d0-a7fe3b1db9cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-z99ww" Feb 20 00:30:00 crc kubenswrapper[5119]: I0220 00:30:00.538013 5119 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525790-pl29g" Feb 20 00:30:00 crc kubenswrapper[5119]: I0220 00:30:00.550198 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-z99ww" Feb 20 00:30:00 crc kubenswrapper[5119]: I0220 00:30:00.793915 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525790-z99ww"] Feb 20 00:30:00 crc kubenswrapper[5119]: I0220 00:30:00.837593 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525790-pl29g"] Feb 20 00:30:00 crc kubenswrapper[5119]: W0220 00:30:00.845715 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod805d70ec_ae2b_4bc3_a83c_d4d1c478516c.slice/crio-4b9dc0006203d5d49981b62a285e207b70abafb361b54e133ae166e95ff8cd2e WatchSource:0}: Error finding container 4b9dc0006203d5d49981b62a285e207b70abafb361b54e133ae166e95ff8cd2e: Status 404 returned error can't find the container with id 4b9dc0006203d5d49981b62a285e207b70abafb361b54e133ae166e95ff8cd2e Feb 20 00:30:01 crc kubenswrapper[5119]: I0220 00:30:01.625723 5119 generic.go:358] "Generic (PLEG): container finished" podID="c555fade-0657-492d-b1d0-a7fe3b1db9cc" containerID="8882e6d8e6ef2540a0c6e2650f9a22ece4da1d0669a080d7dab3d94cb043dd5f" exitCode=0 Feb 20 00:30:01 crc kubenswrapper[5119]: I0220 00:30:01.625774 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-z99ww" event={"ID":"c555fade-0657-492d-b1d0-a7fe3b1db9cc","Type":"ContainerDied","Data":"8882e6d8e6ef2540a0c6e2650f9a22ece4da1d0669a080d7dab3d94cb043dd5f"} Feb 20 00:30:01 crc kubenswrapper[5119]: I0220 00:30:01.626212 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-z99ww" event={"ID":"c555fade-0657-492d-b1d0-a7fe3b1db9cc","Type":"ContainerStarted","Data":"457debedcd2b322a276cffac8d31d8e4071a33496a4d07d7f0dd885ccaed64af"} Feb 20 00:30:01 crc kubenswrapper[5119]: I0220 00:30:01.627428 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525790-pl29g" event={"ID":"805d70ec-ae2b-4bc3-a83c-d4d1c478516c","Type":"ContainerStarted","Data":"4b9dc0006203d5d49981b62a285e207b70abafb361b54e133ae166e95ff8cd2e"} Feb 20 00:30:02 crc kubenswrapper[5119]: I0220 00:30:02.635121 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525790-pl29g" event={"ID":"805d70ec-ae2b-4bc3-a83c-d4d1c478516c","Type":"ContainerStarted","Data":"fa667a14f2d39ac2d62d5bb0cee9d0256178224c20725667f4d3b3ed2552a48e"} Feb 20 00:30:02 crc kubenswrapper[5119]: I0220 00:30:02.649759 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29525790-pl29g" podStartSLOduration=1.481902265 podStartE2EDuration="2.649736525s" podCreationTimestamp="2026-02-20 00:30:00 +0000 UTC" firstStartedPulling="2026-02-20 00:30:00.84845466 +0000 UTC m=+1182.827418942" lastFinishedPulling="2026-02-20 00:30:02.0162889 +0000 UTC m=+1183.995253202" observedRunningTime="2026-02-20 00:30:02.648652797 +0000 UTC m=+1184.627617099" watchObservedRunningTime="2026-02-20 00:30:02.649736525 +0000 UTC m=+1184.628700817" Feb 20 00:30:02 crc kubenswrapper[5119]: I0220 00:30:02.891049 5119 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-z99ww" Feb 20 00:30:02 crc kubenswrapper[5119]: I0220 00:30:02.980757 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-759f64656b-lcgw2_ea1d85dc-0434-4191-b4dc-be938aab1430/cert-manager-controller/0.log" Feb 20 00:30:03 crc kubenswrapper[5119]: I0220 00:30:03.002856 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c555fade-0657-492d-b1d0-a7fe3b1db9cc-config-volume\") pod \"c555fade-0657-492d-b1d0-a7fe3b1db9cc\" (UID: \"c555fade-0657-492d-b1d0-a7fe3b1db9cc\") " Feb 20 00:30:03 crc kubenswrapper[5119]: I0220 00:30:03.002905 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgnvl\" (UniqueName: \"kubernetes.io/projected/c555fade-0657-492d-b1d0-a7fe3b1db9cc-kube-api-access-zgnvl\") pod \"c555fade-0657-492d-b1d0-a7fe3b1db9cc\" (UID: \"c555fade-0657-492d-b1d0-a7fe3b1db9cc\") " Feb 20 00:30:03 crc kubenswrapper[5119]: I0220 00:30:03.002937 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c555fade-0657-492d-b1d0-a7fe3b1db9cc-secret-volume\") pod \"c555fade-0657-492d-b1d0-a7fe3b1db9cc\" (UID: \"c555fade-0657-492d-b1d0-a7fe3b1db9cc\") " Feb 20 00:30:03 crc kubenswrapper[5119]: I0220 00:30:03.003749 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c555fade-0657-492d-b1d0-a7fe3b1db9cc-config-volume" (OuterVolumeSpecName: "config-volume") pod "c555fade-0657-492d-b1d0-a7fe3b1db9cc" (UID: "c555fade-0657-492d-b1d0-a7fe3b1db9cc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:30:03 crc kubenswrapper[5119]: I0220 00:30:03.010705 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c555fade-0657-492d-b1d0-a7fe3b1db9cc-kube-api-access-zgnvl" (OuterVolumeSpecName: "kube-api-access-zgnvl") pod "c555fade-0657-492d-b1d0-a7fe3b1db9cc" (UID: "c555fade-0657-492d-b1d0-a7fe3b1db9cc"). InnerVolumeSpecName "kube-api-access-zgnvl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:30:03 crc kubenswrapper[5119]: I0220 00:30:03.022182 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c555fade-0657-492d-b1d0-a7fe3b1db9cc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c555fade-0657-492d-b1d0-a7fe3b1db9cc" (UID: "c555fade-0657-492d-b1d0-a7fe3b1db9cc"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:30:03 crc kubenswrapper[5119]: I0220 00:30:03.105114 5119 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c555fade-0657-492d-b1d0-a7fe3b1db9cc-config-volume\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:03 crc kubenswrapper[5119]: I0220 00:30:03.105388 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zgnvl\" (UniqueName: \"kubernetes.io/projected/c555fade-0657-492d-b1d0-a7fe3b1db9cc-kube-api-access-zgnvl\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:03 crc kubenswrapper[5119]: I0220 00:30:03.105398 5119 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c555fade-0657-492d-b1d0-a7fe3b1db9cc-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:03 crc kubenswrapper[5119]: I0220 00:30:03.130297 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-8966b78d4-2ngdf_06be9c2b-063d-431f-9500-06d071455834/cert-manager-cainjector/0.log" Feb 20 00:30:03 crc kubenswrapper[5119]: I0220 00:30:03.185443 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-597b96b99b-hvxjc_31cacb6a-1bdb-47a3-a04b-e86aa109f295/cert-manager-webhook/0.log" Feb 20 00:30:03 crc kubenswrapper[5119]: I0220 00:30:03.644654 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-z99ww" Feb 20 00:30:03 crc kubenswrapper[5119]: I0220 00:30:03.644671 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-z99ww" event={"ID":"c555fade-0657-492d-b1d0-a7fe3b1db9cc","Type":"ContainerDied","Data":"457debedcd2b322a276cffac8d31d8e4071a33496a4d07d7f0dd885ccaed64af"} Feb 20 00:30:03 crc kubenswrapper[5119]: I0220 00:30:03.644718 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="457debedcd2b322a276cffac8d31d8e4071a33496a4d07d7f0dd885ccaed64af" Feb 20 00:30:03 crc kubenswrapper[5119]: I0220 00:30:03.646243 5119 generic.go:358] "Generic (PLEG): container finished" podID="805d70ec-ae2b-4bc3-a83c-d4d1c478516c" containerID="fa667a14f2d39ac2d62d5bb0cee9d0256178224c20725667f4d3b3ed2552a48e" exitCode=0 Feb 20 00:30:03 crc kubenswrapper[5119]: I0220 00:30:03.646438 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525790-pl29g" event={"ID":"805d70ec-ae2b-4bc3-a83c-d4d1c478516c","Type":"ContainerDied","Data":"fa667a14f2d39ac2d62d5bb0cee9d0256178224c20725667f4d3b3ed2552a48e"} Feb 20 00:30:04 crc kubenswrapper[5119]: I0220 00:30:04.992329 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525790-pl29g" Feb 20 00:30:05 crc kubenswrapper[5119]: I0220 00:30:05.134274 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp8ll\" (UniqueName: \"kubernetes.io/projected/805d70ec-ae2b-4bc3-a83c-d4d1c478516c-kube-api-access-mp8ll\") pod \"805d70ec-ae2b-4bc3-a83c-d4d1c478516c\" (UID: \"805d70ec-ae2b-4bc3-a83c-d4d1c478516c\") " Feb 20 00:30:05 crc kubenswrapper[5119]: I0220 00:30:05.149533 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/805d70ec-ae2b-4bc3-a83c-d4d1c478516c-kube-api-access-mp8ll" (OuterVolumeSpecName: "kube-api-access-mp8ll") pod "805d70ec-ae2b-4bc3-a83c-d4d1c478516c" (UID: "805d70ec-ae2b-4bc3-a83c-d4d1c478516c"). InnerVolumeSpecName "kube-api-access-mp8ll". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:30:05 crc kubenswrapper[5119]: I0220 00:30:05.236059 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mp8ll\" (UniqueName: \"kubernetes.io/projected/805d70ec-ae2b-4bc3-a83c-d4d1c478516c-kube-api-access-mp8ll\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:05 crc kubenswrapper[5119]: I0220 00:30:05.662576 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525790-pl29g" event={"ID":"805d70ec-ae2b-4bc3-a83c-d4d1c478516c","Type":"ContainerDied","Data":"4b9dc0006203d5d49981b62a285e207b70abafb361b54e133ae166e95ff8cd2e"} Feb 20 00:30:05 crc kubenswrapper[5119]: I0220 00:30:05.662614 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525790-pl29g" Feb 20 00:30:05 crc kubenswrapper[5119]: I0220 00:30:05.662625 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b9dc0006203d5d49981b62a285e207b70abafb361b54e133ae166e95ff8cd2e" Feb 20 00:30:05 crc kubenswrapper[5119]: I0220 00:30:05.727369 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29525784-nmmdc"] Feb 20 00:30:05 crc kubenswrapper[5119]: I0220 00:30:05.731429 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29525784-nmmdc"] Feb 20 00:30:06 crc kubenswrapper[5119]: I0220 00:30:06.865064 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="296ec321-10e7-4337-aa40-f6132860ee9e" path="/var/lib/kubelet/pods/296ec321-10e7-4337-aa40-f6132860ee9e/volumes" Feb 20 00:30:18 crc kubenswrapper[5119]: I0220 00:30:18.779485 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-7cddz_87ccd0bd-19cf-436b-975a-01bb63ec761f/prometheus-operator/0.log" Feb 20 00:30:18 crc kubenswrapper[5119]: I0220 00:30:18.896202 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-55d8ffb48f-6cmxx_47019409-3e65-4a88-89cd-220570c1dea3/prometheus-operator-admission-webhook/0.log" Feb 20 00:30:18 crc kubenswrapper[5119]: I0220 00:30:18.941905 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-55d8ffb48f-m9rzr_c7f3c09e-f40f-4517-81b4-9be7d5a4922c/prometheus-operator-admission-webhook/0.log" Feb 20 00:30:19 crc kubenswrapper[5119]: I0220 00:30:19.057215 5119 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-ldlj9_cf6fac41-5b87-46d8-bc02-310e87d1b79c/operator/0.log" Feb 20 00:30:19 crc kubenswrapper[5119]: I0220 00:30:19.127462 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-bzhfk_1d0cbbf1-3ab7-4deb-8dbc-961ef0ed7c7e/perses-operator/0.log" Feb 20 00:30:19 crc kubenswrapper[5119]: I0220 00:30:19.390982 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rlzxr_e24ea4b0-1a34-4fb3-b40c-684c03795e07/kube-multus/0.log" Feb 20 00:30:19 crc kubenswrapper[5119]: I0220 00:30:19.400092 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rlzxr_e24ea4b0-1a34-4fb3-b40c-684c03795e07/kube-multus/0.log" Feb 20 00:30:19 crc kubenswrapper[5119]: I0220 00:30:19.402680 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 20 00:30:19 crc kubenswrapper[5119]: I0220 00:30:19.410889 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 20 00:30:35 crc kubenswrapper[5119]: I0220 00:30:35.263338 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh_d2ed2092-f16b-465e-b93a-f7c4dd8368e0/util/0.log" Feb 20 00:30:35 crc kubenswrapper[5119]: I0220 00:30:35.462978 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh_d2ed2092-f16b-465e-b93a-f7c4dd8368e0/util/0.log" Feb 20 00:30:35 crc kubenswrapper[5119]: I0220 00:30:35.469613 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh_d2ed2092-f16b-465e-b93a-f7c4dd8368e0/pull/0.log" Feb 20 00:30:35 crc kubenswrapper[5119]: I0220 00:30:35.474204 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh_d2ed2092-f16b-465e-b93a-f7c4dd8368e0/pull/0.log" Feb 20 00:30:35 crc kubenswrapper[5119]: I0220 00:30:35.487084 5119 scope.go:117] "RemoveContainer" containerID="f1509f8373951996f6a4e8a67ad71989aa3905c0adae8a4c142b69acf39b522f" Feb 20 00:30:35 crc kubenswrapper[5119]: I0220 00:30:35.634356 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh_d2ed2092-f16b-465e-b93a-f7c4dd8368e0/util/0.log" Feb 20 00:30:35 crc kubenswrapper[5119]: I0220 00:30:35.680359 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh_d2ed2092-f16b-465e-b93a-f7c4dd8368e0/pull/0.log" Feb 20 00:30:35 crc kubenswrapper[5119]: I0220 00:30:35.700654 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12ntrh_d2ed2092-f16b-465e-b93a-f7c4dd8368e0/extract/0.log" Feb 20 00:30:35 crc kubenswrapper[5119]: I0220 00:30:35.808811 5119 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb_03f522f3-35e9-4534-a728-bc225f746dda/util/0.log" Feb 20 00:30:35 crc kubenswrapper[5119]: I0220 00:30:35.955270 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb_03f522f3-35e9-4534-a728-bc225f746dda/pull/0.log" Feb 20 00:30:35 crc kubenswrapper[5119]: I0220 00:30:35.973005 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb_03f522f3-35e9-4534-a728-bc225f746dda/util/0.log" Feb 20 00:30:36 crc kubenswrapper[5119]: I0220 00:30:36.024864 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb_03f522f3-35e9-4534-a728-bc225f746dda/pull/0.log" Feb 20 00:30:36 crc kubenswrapper[5119]: I0220 00:30:36.177838 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb_03f522f3-35e9-4534-a728-bc225f746dda/extract/0.log" Feb 20 00:30:36 crc kubenswrapper[5119]: I0220 00:30:36.262333 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb_03f522f3-35e9-4534-a728-bc225f746dda/util/0.log" Feb 20 00:30:36 crc kubenswrapper[5119]: I0220 00:30:36.265791 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f8nprb_03f522f3-35e9-4534-a728-bc225f746dda/pull/0.log" Feb 20 00:30:36 crc kubenswrapper[5119]: I0220 00:30:36.348850 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6_77de016a-acf4-43e7-a390-e30a3d712904/util/0.log" Feb 20 00:30:36 crc kubenswrapper[5119]: I0220 00:30:36.486013 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6_77de016a-acf4-43e7-a390-e30a3d712904/util/0.log" Feb 20 00:30:36 crc kubenswrapper[5119]: I0220 00:30:36.538906 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6_77de016a-acf4-43e7-a390-e30a3d712904/pull/0.log" Feb 20 00:30:36 crc kubenswrapper[5119]: I0220 00:30:36.573316 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6_77de016a-acf4-43e7-a390-e30a3d712904/pull/0.log" Feb 20 00:30:36 crc kubenswrapper[5119]: I0220 00:30:36.763419 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6_77de016a-acf4-43e7-a390-e30a3d712904/util/0.log" Feb 20 00:30:36 crc kubenswrapper[5119]: I0220 00:30:36.772573 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6_77de016a-acf4-43e7-a390-e30a3d712904/extract/0.log" Feb 20 00:30:36 crc kubenswrapper[5119]: I0220 00:30:36.777313 5119 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5nt8b6_77de016a-acf4-43e7-a390-e30a3d712904/pull/0.log" Feb 20 00:30:36 crc kubenswrapper[5119]: I0220 00:30:36.949065 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7_27c1b674-a630-4652-8c16-55724136f7d8/util/0.log" Feb 20 00:30:37 crc kubenswrapper[5119]: I0220 00:30:37.086284 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7_27c1b674-a630-4652-8c16-55724136f7d8/pull/0.log" Feb 20 00:30:37 crc kubenswrapper[5119]: I0220 00:30:37.115964 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7_27c1b674-a630-4652-8c16-55724136f7d8/util/0.log" Feb 20 00:30:37 crc kubenswrapper[5119]: I0220 00:30:37.123902 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7_27c1b674-a630-4652-8c16-55724136f7d8/pull/0.log" Feb 20 00:30:37 crc kubenswrapper[5119]: I0220 00:30:37.267616 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7_27c1b674-a630-4652-8c16-55724136f7d8/util/0.log" Feb 20 00:30:37 crc kubenswrapper[5119]: I0220 00:30:37.292581 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7_27c1b674-a630-4652-8c16-55724136f7d8/pull/0.log" Feb 20 00:30:37 crc kubenswrapper[5119]: I0220 00:30:37.294533 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08tmmj7_27c1b674-a630-4652-8c16-55724136f7d8/extract/0.log" Feb 20 00:30:37 crc kubenswrapper[5119]: I0220 00:30:37.489855 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hqvdk_807ffa59-982e-453f-ba96-3b25858a4b20/extract-utilities/0.log" Feb 20 00:30:37 crc kubenswrapper[5119]: I0220 00:30:37.683057 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hqvdk_807ffa59-982e-453f-ba96-3b25858a4b20/extract-content/0.log" Feb 20 00:30:37 crc kubenswrapper[5119]: I0220 00:30:37.713556 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hqvdk_807ffa59-982e-453f-ba96-3b25858a4b20/extract-content/0.log" Feb 20 00:30:37 crc kubenswrapper[5119]: I0220 00:30:37.730372 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hqvdk_807ffa59-982e-453f-ba96-3b25858a4b20/extract-utilities/0.log" Feb 20 00:30:37 crc kubenswrapper[5119]: I0220 00:30:37.928949 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hqvdk_807ffa59-982e-453f-ba96-3b25858a4b20/extract-utilities/0.log" Feb 20 00:30:37 crc kubenswrapper[5119]: I0220 00:30:37.974501 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hqvdk_807ffa59-982e-453f-ba96-3b25858a4b20/extract-content/0.log" Feb 20 00:30:38 crc kubenswrapper[5119]: I0220 00:30:38.077759 5119 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-nc7c2_59b63f17-ee7d-48fc-960d-195081b022c7/extract-utilities/0.log" Feb 20 00:30:38 crc kubenswrapper[5119]: I0220 00:30:38.083404 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hqvdk_807ffa59-982e-453f-ba96-3b25858a4b20/registry-server/0.log" Feb 20 00:30:38 crc kubenswrapper[5119]: I0220 00:30:38.199388 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nc7c2_59b63f17-ee7d-48fc-960d-195081b022c7/extract-utilities/0.log" Feb 20 00:30:38 crc kubenswrapper[5119]: I0220 00:30:38.201189 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nc7c2_59b63f17-ee7d-48fc-960d-195081b022c7/extract-content/0.log" Feb 20 00:30:38 crc kubenswrapper[5119]: I0220 00:30:38.218072 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nc7c2_59b63f17-ee7d-48fc-960d-195081b022c7/extract-content/0.log" Feb 20 00:30:38 crc kubenswrapper[5119]: I0220 00:30:38.363128 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nc7c2_59b63f17-ee7d-48fc-960d-195081b022c7/extract-utilities/0.log" Feb 20 00:30:38 crc kubenswrapper[5119]: I0220 00:30:38.365076 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nc7c2_59b63f17-ee7d-48fc-960d-195081b022c7/extract-content/0.log" Feb 20 00:30:38 crc kubenswrapper[5119]: I0220 00:30:38.422043 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-htf8j_b64adc58-6ed6-41c9-95bd-535f34890377/marketplace-operator/0.log" Feb 20 00:30:38 crc kubenswrapper[5119]: I0220 00:30:38.572932 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nc7c2_59b63f17-ee7d-48fc-960d-195081b022c7/registry-server/0.log" Feb 20 00:30:38 crc kubenswrapper[5119]: I0220 00:30:38.588605 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5nb2n_11050770-13c5-418e-b5fc-cc1bec3dc51e/extract-utilities/0.log" Feb 20 00:30:38 crc kubenswrapper[5119]: I0220 00:30:38.755969 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5nb2n_11050770-13c5-418e-b5fc-cc1bec3dc51e/extract-utilities/0.log" Feb 20 00:30:38 crc kubenswrapper[5119]: I0220 00:30:38.783051 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5nb2n_11050770-13c5-418e-b5fc-cc1bec3dc51e/extract-content/0.log" Feb 20 00:30:38 crc kubenswrapper[5119]: I0220 00:30:38.789174 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5nb2n_11050770-13c5-418e-b5fc-cc1bec3dc51e/extract-content/0.log" Feb 20 00:30:38 crc kubenswrapper[5119]: I0220 00:30:38.928221 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5nb2n_11050770-13c5-418e-b5fc-cc1bec3dc51e/extract-utilities/0.log" Feb 20 00:30:38 crc kubenswrapper[5119]: I0220 00:30:38.952443 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5nb2n_11050770-13c5-418e-b5fc-cc1bec3dc51e/extract-content/0.log" Feb 20 00:30:39 crc kubenswrapper[5119]: I0220 00:30:39.177976 5119 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-5nb2n_11050770-13c5-418e-b5fc-cc1bec3dc51e/registry-server/0.log" Feb 20 00:30:53 crc kubenswrapper[5119]: I0220 00:30:53.090881 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-55d8ffb48f-6cmxx_47019409-3e65-4a88-89cd-220570c1dea3/prometheus-operator-admission-webhook/0.log" Feb 20 00:30:53 crc kubenswrapper[5119]: I0220 00:30:53.091242 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-7cddz_87ccd0bd-19cf-436b-975a-01bb63ec761f/prometheus-operator/0.log" Feb 20 00:30:53 crc kubenswrapper[5119]: I0220 00:30:53.204742 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-ldlj9_cf6fac41-5b87-46d8-bc02-310e87d1b79c/operator/0.log" Feb 20 00:30:53 crc kubenswrapper[5119]: I0220 00:30:53.205924 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-55d8ffb48f-m9rzr_c7f3c09e-f40f-4517-81b4-9be7d5a4922c/prometheus-operator-admission-webhook/0.log" Feb 20 00:30:53 crc kubenswrapper[5119]: I0220 00:30:53.248050 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-bzhfk_1d0cbbf1-3ab7-4deb-8dbc-961ef0ed7c7e/perses-operator/0.log" Feb 20 00:31:32 crc kubenswrapper[5119]: I0220 00:31:32.454526 5119 generic.go:358] "Generic (PLEG): container finished" podID="2331f12a-a661-4605-9947-fef583189905" containerID="cb397ad4d2c49c2160da85516c91c9780220f3e6ec1fab1a3704e039469f323b" exitCode=0 Feb 20 00:31:32 crc kubenswrapper[5119]: I0220 00:31:32.454608 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9sn9/must-gather-2bn94" event={"ID":"2331f12a-a661-4605-9947-fef583189905","Type":"ContainerDied","Data":"cb397ad4d2c49c2160da85516c91c9780220f3e6ec1fab1a3704e039469f323b"} Feb 20 00:31:32 crc kubenswrapper[5119]: I0220 00:31:32.455774 5119 scope.go:117] "RemoveContainer" containerID="cb397ad4d2c49c2160da85516c91c9780220f3e6ec1fab1a3704e039469f323b" Feb 20 00:31:32 crc kubenswrapper[5119]: I0220 00:31:32.654618 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z9sn9_must-gather-2bn94_2331f12a-a661-4605-9947-fef583189905/gather/0.log" Feb 20 00:31:38 crc kubenswrapper[5119]: I0220 00:31:38.879309 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z9sn9/must-gather-2bn94"] Feb 20 00:31:38 crc kubenswrapper[5119]: I0220 00:31:38.880790 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-z9sn9/must-gather-2bn94" podUID="2331f12a-a661-4605-9947-fef583189905" containerName="copy" containerID="cri-o://96051c21d356b7382d7f8d64604d8cee8413812c8e531980eac58dc0d94b7c15" gracePeriod=2 Feb 20 00:31:38 crc kubenswrapper[5119]: I0220 00:31:38.882571 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z9sn9/must-gather-2bn94"] Feb 20 00:31:38 crc kubenswrapper[5119]: I0220 00:31:38.882967 5119 status_manager.go:895] "Failed to get status for pod" podUID="2331f12a-a661-4605-9947-fef583189905" pod="openshift-must-gather-z9sn9/must-gather-2bn94" err="pods \"must-gather-2bn94\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-z9sn9\": no relationship found between node 'crc' and this object" 
Feb 20 00:31:39 crc kubenswrapper[5119]: I0220 00:31:39.518321 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z9sn9_must-gather-2bn94_2331f12a-a661-4605-9947-fef583189905/copy/0.log" Feb 20 00:31:39 crc kubenswrapper[5119]: I0220 00:31:39.519508 5119 generic.go:358] "Generic (PLEG): container finished" podID="2331f12a-a661-4605-9947-fef583189905" containerID="96051c21d356b7382d7f8d64604d8cee8413812c8e531980eac58dc0d94b7c15" exitCode=143 Feb 20 00:31:39 crc kubenswrapper[5119]: I0220 00:31:39.818013 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z9sn9_must-gather-2bn94_2331f12a-a661-4605-9947-fef583189905/copy/0.log" Feb 20 00:31:39 crc kubenswrapper[5119]: I0220 00:31:39.818400 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z9sn9/must-gather-2bn94" Feb 20 00:31:39 crc kubenswrapper[5119]: I0220 00:31:39.819841 5119 status_manager.go:895] "Failed to get status for pod" podUID="2331f12a-a661-4605-9947-fef583189905" pod="openshift-must-gather-z9sn9/must-gather-2bn94" err="pods \"must-gather-2bn94\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-z9sn9\": no relationship found between node 'crc' and this object" Feb 20 00:31:39 crc kubenswrapper[5119]: I0220 00:31:39.924253 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2331f12a-a661-4605-9947-fef583189905-must-gather-output\") pod \"2331f12a-a661-4605-9947-fef583189905\" (UID: \"2331f12a-a661-4605-9947-fef583189905\") " Feb 20 00:31:39 crc kubenswrapper[5119]: I0220 00:31:39.924454 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfc5s\" (UniqueName: \"kubernetes.io/projected/2331f12a-a661-4605-9947-fef583189905-kube-api-access-mfc5s\") pod \"2331f12a-a661-4605-9947-fef583189905\" (UID: \"2331f12a-a661-4605-9947-fef583189905\") " Feb 20 00:31:39 crc kubenswrapper[5119]: I0220 00:31:39.932696 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2331f12a-a661-4605-9947-fef583189905-kube-api-access-mfc5s" (OuterVolumeSpecName: "kube-api-access-mfc5s") pod "2331f12a-a661-4605-9947-fef583189905" (UID: "2331f12a-a661-4605-9947-fef583189905"). InnerVolumeSpecName "kube-api-access-mfc5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:31:39 crc kubenswrapper[5119]: I0220 00:31:39.969516 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2331f12a-a661-4605-9947-fef583189905-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "2331f12a-a661-4605-9947-fef583189905" (UID: "2331f12a-a661-4605-9947-fef583189905"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:31:40 crc kubenswrapper[5119]: I0220 00:31:40.025938 5119 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2331f12a-a661-4605-9947-fef583189905-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 20 00:31:40 crc kubenswrapper[5119]: I0220 00:31:40.025970 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfc5s\" (UniqueName: \"kubernetes.io/projected/2331f12a-a661-4605-9947-fef583189905-kube-api-access-mfc5s\") on node \"crc\" DevicePath \"\"" Feb 20 00:31:40 crc kubenswrapper[5119]: I0220 00:31:40.529442 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z9sn9_must-gather-2bn94_2331f12a-a661-4605-9947-fef583189905/copy/0.log" Feb 20 00:31:40 crc kubenswrapper[5119]: I0220 00:31:40.530274 5119 scope.go:117] "RemoveContainer" containerID="96051c21d356b7382d7f8d64604d8cee8413812c8e531980eac58dc0d94b7c15" Feb 20 00:31:40 crc kubenswrapper[5119]: I0220 00:31:40.530459 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z9sn9/must-gather-2bn94" Feb 20 00:31:40 crc kubenswrapper[5119]: I0220 00:31:40.534672 5119 status_manager.go:895] "Failed to get status for pod" podUID="2331f12a-a661-4605-9947-fef583189905" pod="openshift-must-gather-z9sn9/must-gather-2bn94" err="pods \"must-gather-2bn94\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-z9sn9\": no relationship found between node 'crc' and this object" Feb 20 00:31:40 crc kubenswrapper[5119]: I0220 00:31:40.558082 5119 status_manager.go:895] "Failed to get status for pod" podUID="2331f12a-a661-4605-9947-fef583189905" pod="openshift-must-gather-z9sn9/must-gather-2bn94" err="pods \"must-gather-2bn94\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-z9sn9\": no relationship found between node 'crc' and this object" Feb 20 00:31:40 crc kubenswrapper[5119]: I0220 00:31:40.559516 5119 scope.go:117] "RemoveContainer" containerID="cb397ad4d2c49c2160da85516c91c9780220f3e6ec1fab1a3704e039469f323b" Feb 20 00:31:40 crc kubenswrapper[5119]: I0220 00:31:40.871407 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2331f12a-a661-4605-9947-fef583189905" path="/var/lib/kubelet/pods/2331f12a-a661-4605-9947-fef583189905/volumes" Feb 20 00:31:42 crc kubenswrapper[5119]: I0220 00:31:42.160794 5119 patch_prober.go:28] interesting pod/machine-config-daemon-l7jjp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:31:42 crc kubenswrapper[5119]: I0220 00:31:42.161214 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" podUID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:32:00 crc kubenswrapper[5119]: I0220 00:32:00.151817 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29525792-q54qs"] Feb 20 00:32:00 crc kubenswrapper[5119]: I0220 00:32:00.153339 5119 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="c555fade-0657-492d-b1d0-a7fe3b1db9cc" containerName="collect-profiles" Feb 20 00:32:00 crc kubenswrapper[5119]: I0220 00:32:00.153358 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="c555fade-0657-492d-b1d0-a7fe3b1db9cc" containerName="collect-profiles" Feb 20 00:32:00 crc kubenswrapper[5119]: I0220 00:32:00.153378 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="805d70ec-ae2b-4bc3-a83c-d4d1c478516c" containerName="oc" Feb 20 00:32:00 crc kubenswrapper[5119]: I0220 00:32:00.153386 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="805d70ec-ae2b-4bc3-a83c-d4d1c478516c" containerName="oc" Feb 20 00:32:00 crc kubenswrapper[5119]: I0220 00:32:00.153411 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2331f12a-a661-4605-9947-fef583189905" containerName="copy" Feb 20 00:32:00 crc kubenswrapper[5119]: I0220 00:32:00.153419 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="2331f12a-a661-4605-9947-fef583189905" containerName="copy" Feb 20 00:32:00 crc kubenswrapper[5119]: I0220 00:32:00.153459 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2331f12a-a661-4605-9947-fef583189905" containerName="gather" Feb 20 00:32:00 crc kubenswrapper[5119]: I0220 00:32:00.153466 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="2331f12a-a661-4605-9947-fef583189905" containerName="gather" Feb 20 00:32:00 crc kubenswrapper[5119]: I0220 00:32:00.153707 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="2331f12a-a661-4605-9947-fef583189905" containerName="copy" Feb 20 00:32:00 crc kubenswrapper[5119]: I0220 00:32:00.153727 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="2331f12a-a661-4605-9947-fef583189905" containerName="gather" Feb 20 00:32:00 crc kubenswrapper[5119]: I0220 00:32:00.153743 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="c555fade-0657-492d-b1d0-a7fe3b1db9cc" containerName="collect-profiles" Feb 20 00:32:00 crc kubenswrapper[5119]: I0220 00:32:00.153750 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="805d70ec-ae2b-4bc3-a83c-d4d1c478516c" containerName="oc" Feb 20 00:32:00 crc kubenswrapper[5119]: I0220 00:32:00.159748 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525792-q54qs" Feb 20 00:32:00 crc kubenswrapper[5119]: I0220 00:32:00.162783 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 20 00:32:00 crc kubenswrapper[5119]: I0220 00:32:00.162806 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525792-q54qs"] Feb 20 00:32:00 crc kubenswrapper[5119]: I0220 00:32:00.163309 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 20 00:32:00 crc kubenswrapper[5119]: I0220 00:32:00.163404 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-nmc85\"" Feb 20 00:32:00 crc kubenswrapper[5119]: I0220 00:32:00.290437 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6s9p\" (UniqueName: \"kubernetes.io/projected/1e6fc5f0-4c3d-4641-8611-7e36001958cc-kube-api-access-g6s9p\") pod \"auto-csr-approver-29525792-q54qs\" (UID: \"1e6fc5f0-4c3d-4641-8611-7e36001958cc\") " pod="openshift-infra/auto-csr-approver-29525792-q54qs" Feb 20 00:32:00 crc kubenswrapper[5119]: I0220 00:32:00.392471 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g6s9p\" (UniqueName: \"kubernetes.io/projected/1e6fc5f0-4c3d-4641-8611-7e36001958cc-kube-api-access-g6s9p\") pod \"auto-csr-approver-29525792-q54qs\" (UID: \"1e6fc5f0-4c3d-4641-8611-7e36001958cc\") " pod="openshift-infra/auto-csr-approver-29525792-q54qs" Feb 20 00:32:00 crc kubenswrapper[5119]: I0220 00:32:00.430608 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6s9p\" (UniqueName: \"kubernetes.io/projected/1e6fc5f0-4c3d-4641-8611-7e36001958cc-kube-api-access-g6s9p\") pod \"auto-csr-approver-29525792-q54qs\" (UID: \"1e6fc5f0-4c3d-4641-8611-7e36001958cc\") " pod="openshift-infra/auto-csr-approver-29525792-q54qs" Feb 20 00:32:00 crc kubenswrapper[5119]: I0220 00:32:00.491395 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525792-q54qs" Feb 20 00:32:00 crc kubenswrapper[5119]: I0220 00:32:00.792856 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525792-q54qs"] Feb 20 00:32:00 crc kubenswrapper[5119]: W0220 00:32:00.796682 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e6fc5f0_4c3d_4641_8611_7e36001958cc.slice/crio-def694f4e64bb1c9c3b5f69a959d5972465883a515e47efecbcdf7074a0ddcd8 WatchSource:0}: Error finding container def694f4e64bb1c9c3b5f69a959d5972465883a515e47efecbcdf7074a0ddcd8: Status 404 returned error can't find the container with id def694f4e64bb1c9c3b5f69a959d5972465883a515e47efecbcdf7074a0ddcd8 Feb 20 00:32:00 crc kubenswrapper[5119]: I0220 00:32:00.797425 5119 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 20 00:32:01 crc kubenswrapper[5119]: I0220 00:32:01.748281 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525792-q54qs" event={"ID":"1e6fc5f0-4c3d-4641-8611-7e36001958cc","Type":"ContainerStarted","Data":"def694f4e64bb1c9c3b5f69a959d5972465883a515e47efecbcdf7074a0ddcd8"} Feb 20 00:32:02 crc kubenswrapper[5119]: I0220 00:32:02.759626 5119 generic.go:358] "Generic (PLEG): container finished" podID="1e6fc5f0-4c3d-4641-8611-7e36001958cc" containerID="2c42c12878703c280f4ca5ba6c2adef678a2919c794626bb2b00a7485dd4cb59" exitCode=0 Feb 20 00:32:02 crc kubenswrapper[5119]: I0220 00:32:02.759725 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525792-q54qs" event={"ID":"1e6fc5f0-4c3d-4641-8611-7e36001958cc","Type":"ContainerDied","Data":"2c42c12878703c280f4ca5ba6c2adef678a2919c794626bb2b00a7485dd4cb59"} Feb 20 00:32:04 crc kubenswrapper[5119]: I0220 00:32:04.089886 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525792-q54qs" Feb 20 00:32:04 crc kubenswrapper[5119]: I0220 00:32:04.153090 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6s9p\" (UniqueName: \"kubernetes.io/projected/1e6fc5f0-4c3d-4641-8611-7e36001958cc-kube-api-access-g6s9p\") pod \"1e6fc5f0-4c3d-4641-8611-7e36001958cc\" (UID: \"1e6fc5f0-4c3d-4641-8611-7e36001958cc\") " Feb 20 00:32:04 crc kubenswrapper[5119]: I0220 00:32:04.167488 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e6fc5f0-4c3d-4641-8611-7e36001958cc-kube-api-access-g6s9p" (OuterVolumeSpecName: "kube-api-access-g6s9p") pod "1e6fc5f0-4c3d-4641-8611-7e36001958cc" (UID: "1e6fc5f0-4c3d-4641-8611-7e36001958cc"). InnerVolumeSpecName "kube-api-access-g6s9p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:32:04 crc kubenswrapper[5119]: I0220 00:32:04.254219 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g6s9p\" (UniqueName: \"kubernetes.io/projected/1e6fc5f0-4c3d-4641-8611-7e36001958cc-kube-api-access-g6s9p\") on node \"crc\" DevicePath \"\"" Feb 20 00:32:04 crc kubenswrapper[5119]: I0220 00:32:04.790516 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525792-q54qs" Feb 20 00:32:04 crc kubenswrapper[5119]: I0220 00:32:04.790515 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525792-q54qs" event={"ID":"1e6fc5f0-4c3d-4641-8611-7e36001958cc","Type":"ContainerDied","Data":"def694f4e64bb1c9c3b5f69a959d5972465883a515e47efecbcdf7074a0ddcd8"} Feb 20 00:32:04 crc kubenswrapper[5119]: I0220 00:32:04.790778 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="def694f4e64bb1c9c3b5f69a959d5972465883a515e47efecbcdf7074a0ddcd8" Feb 20 00:32:05 crc kubenswrapper[5119]: I0220 00:32:05.181404 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29525786-wsdlp"] Feb 20 00:32:05 crc kubenswrapper[5119]: I0220 00:32:05.190509 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29525786-wsdlp"] Feb 20 00:32:06 crc kubenswrapper[5119]: I0220 00:32:06.870322 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="215f2411-f8a7-4a86-a987-c1555486f58b" path="/var/lib/kubelet/pods/215f2411-f8a7-4a86-a987-c1555486f58b/volumes" Feb 20 00:32:12 crc kubenswrapper[5119]: I0220 00:32:12.160750 5119 patch_prober.go:28] interesting pod/machine-config-daemon-l7jjp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:32:12 crc kubenswrapper[5119]: I0220 00:32:12.161462 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" podUID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:32:31 crc kubenswrapper[5119]: I0220 00:32:31.538601 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m92nq"] Feb 20 00:32:31 crc kubenswrapper[5119]: I0220 00:32:31.540874 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1e6fc5f0-4c3d-4641-8611-7e36001958cc" containerName="oc" Feb 20 00:32:31 crc kubenswrapper[5119]: I0220 00:32:31.540907 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e6fc5f0-4c3d-4641-8611-7e36001958cc" containerName="oc" Feb 20 00:32:31 crc kubenswrapper[5119]: I0220 00:32:31.541212 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="1e6fc5f0-4c3d-4641-8611-7e36001958cc" containerName="oc" Feb 20 00:32:31 crc kubenswrapper[5119]: I0220 00:32:31.552789 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m92nq"] Feb 20 00:32:31 crc kubenswrapper[5119]: I0220 00:32:31.553304 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m92nq" Feb 20 00:32:31 crc kubenswrapper[5119]: I0220 00:32:31.627605 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hjvq\" (UniqueName: \"kubernetes.io/projected/e6131c50-034a-422e-8039-4a39cfcbec6a-kube-api-access-6hjvq\") pod \"redhat-operators-m92nq\" (UID: \"e6131c50-034a-422e-8039-4a39cfcbec6a\") " pod="openshift-marketplace/redhat-operators-m92nq" Feb 20 00:32:31 crc kubenswrapper[5119]: I0220 00:32:31.627724 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6131c50-034a-422e-8039-4a39cfcbec6a-utilities\") pod \"redhat-operators-m92nq\" (UID: \"e6131c50-034a-422e-8039-4a39cfcbec6a\") " pod="openshift-marketplace/redhat-operators-m92nq" Feb 20 00:32:31 crc kubenswrapper[5119]: I0220 00:32:31.627842 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6131c50-034a-422e-8039-4a39cfcbec6a-catalog-content\") pod \"redhat-operators-m92nq\" (UID: \"e6131c50-034a-422e-8039-4a39cfcbec6a\") " pod="openshift-marketplace/redhat-operators-m92nq" Feb 20 00:32:31 crc kubenswrapper[5119]: I0220 00:32:31.712889 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h74cl"] Feb 20 00:32:31 crc kubenswrapper[5119]: I0220 00:32:31.720094 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h74cl" Feb 20 00:32:31 crc kubenswrapper[5119]: I0220 00:32:31.729234 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6hjvq\" (UniqueName: \"kubernetes.io/projected/e6131c50-034a-422e-8039-4a39cfcbec6a-kube-api-access-6hjvq\") pod \"redhat-operators-m92nq\" (UID: \"e6131c50-034a-422e-8039-4a39cfcbec6a\") " pod="openshift-marketplace/redhat-operators-m92nq" Feb 20 00:32:31 crc kubenswrapper[5119]: I0220 00:32:31.729303 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6131c50-034a-422e-8039-4a39cfcbec6a-utilities\") pod \"redhat-operators-m92nq\" (UID: \"e6131c50-034a-422e-8039-4a39cfcbec6a\") " pod="openshift-marketplace/redhat-operators-m92nq" Feb 20 00:32:31 crc kubenswrapper[5119]: I0220 00:32:31.729358 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6131c50-034a-422e-8039-4a39cfcbec6a-catalog-content\") pod \"redhat-operators-m92nq\" (UID: \"e6131c50-034a-422e-8039-4a39cfcbec6a\") " pod="openshift-marketplace/redhat-operators-m92nq" Feb 20 00:32:31 crc kubenswrapper[5119]: I0220 00:32:31.729971 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6131c50-034a-422e-8039-4a39cfcbec6a-catalog-content\") pod \"redhat-operators-m92nq\" (UID: \"e6131c50-034a-422e-8039-4a39cfcbec6a\") " pod="openshift-marketplace/redhat-operators-m92nq" Feb 20 00:32:31 crc kubenswrapper[5119]: I0220 00:32:31.730695 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6131c50-034a-422e-8039-4a39cfcbec6a-utilities\") pod \"redhat-operators-m92nq\" (UID: \"e6131c50-034a-422e-8039-4a39cfcbec6a\") " 
pod="openshift-marketplace/redhat-operators-m92nq" Feb 20 00:32:31 crc kubenswrapper[5119]: I0220 00:32:31.739724 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h74cl"] Feb 20 00:32:31 crc kubenswrapper[5119]: I0220 00:32:31.769965 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hjvq\" (UniqueName: \"kubernetes.io/projected/e6131c50-034a-422e-8039-4a39cfcbec6a-kube-api-access-6hjvq\") pod \"redhat-operators-m92nq\" (UID: \"e6131c50-034a-422e-8039-4a39cfcbec6a\") " pod="openshift-marketplace/redhat-operators-m92nq" Feb 20 00:32:31 crc kubenswrapper[5119]: I0220 00:32:31.831161 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkcn4\" (UniqueName: \"kubernetes.io/projected/184644d6-ae9f-4210-b8e1-56d249890787-kube-api-access-xkcn4\") pod \"certified-operators-h74cl\" (UID: \"184644d6-ae9f-4210-b8e1-56d249890787\") " pod="openshift-marketplace/certified-operators-h74cl" Feb 20 00:32:31 crc kubenswrapper[5119]: I0220 00:32:31.831262 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/184644d6-ae9f-4210-b8e1-56d249890787-utilities\") pod \"certified-operators-h74cl\" (UID: \"184644d6-ae9f-4210-b8e1-56d249890787\") " pod="openshift-marketplace/certified-operators-h74cl" Feb 20 00:32:31 crc kubenswrapper[5119]: I0220 00:32:31.831305 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/184644d6-ae9f-4210-b8e1-56d249890787-catalog-content\") pod \"certified-operators-h74cl\" (UID: \"184644d6-ae9f-4210-b8e1-56d249890787\") " pod="openshift-marketplace/certified-operators-h74cl" Feb 20 00:32:31 crc kubenswrapper[5119]: I0220 00:32:31.892461 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m92nq" Feb 20 00:32:31 crc kubenswrapper[5119]: I0220 00:32:31.932821 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xkcn4\" (UniqueName: \"kubernetes.io/projected/184644d6-ae9f-4210-b8e1-56d249890787-kube-api-access-xkcn4\") pod \"certified-operators-h74cl\" (UID: \"184644d6-ae9f-4210-b8e1-56d249890787\") " pod="openshift-marketplace/certified-operators-h74cl" Feb 20 00:32:31 crc kubenswrapper[5119]: I0220 00:32:31.933186 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/184644d6-ae9f-4210-b8e1-56d249890787-utilities\") pod \"certified-operators-h74cl\" (UID: \"184644d6-ae9f-4210-b8e1-56d249890787\") " pod="openshift-marketplace/certified-operators-h74cl" Feb 20 00:32:31 crc kubenswrapper[5119]: I0220 00:32:31.933239 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/184644d6-ae9f-4210-b8e1-56d249890787-catalog-content\") pod \"certified-operators-h74cl\" (UID: \"184644d6-ae9f-4210-b8e1-56d249890787\") " pod="openshift-marketplace/certified-operators-h74cl" Feb 20 00:32:31 crc kubenswrapper[5119]: I0220 00:32:31.936194 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/184644d6-ae9f-4210-b8e1-56d249890787-utilities\") pod \"certified-operators-h74cl\" (UID: \"184644d6-ae9f-4210-b8e1-56d249890787\") " pod="openshift-marketplace/certified-operators-h74cl" Feb 20 00:32:31 crc kubenswrapper[5119]: I0220 00:32:31.936232 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/184644d6-ae9f-4210-b8e1-56d249890787-catalog-content\") pod \"certified-operators-h74cl\" (UID: \"184644d6-ae9f-4210-b8e1-56d249890787\") " pod="openshift-marketplace/certified-operators-h74cl" Feb 20 00:32:31 crc kubenswrapper[5119]: I0220 00:32:31.956475 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkcn4\" (UniqueName: \"kubernetes.io/projected/184644d6-ae9f-4210-b8e1-56d249890787-kube-api-access-xkcn4\") pod \"certified-operators-h74cl\" (UID: \"184644d6-ae9f-4210-b8e1-56d249890787\") " pod="openshift-marketplace/certified-operators-h74cl" Feb 20 00:32:32 crc kubenswrapper[5119]: I0220 00:32:32.045264 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h74cl" Feb 20 00:32:32 crc kubenswrapper[5119]: I0220 00:32:32.341311 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m92nq"] Feb 20 00:32:32 crc kubenswrapper[5119]: I0220 00:32:32.500862 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h74cl"] Feb 20 00:32:33 crc kubenswrapper[5119]: I0220 00:32:33.110717 5119 generic.go:358] "Generic (PLEG): container finished" podID="e6131c50-034a-422e-8039-4a39cfcbec6a" containerID="ea17088018d9310207804f340fd1f5c2610ac45b7ac66edc803965a96f4522de" exitCode=0 Feb 20 00:32:33 crc kubenswrapper[5119]: I0220 00:32:33.110797 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m92nq" event={"ID":"e6131c50-034a-422e-8039-4a39cfcbec6a","Type":"ContainerDied","Data":"ea17088018d9310207804f340fd1f5c2610ac45b7ac66edc803965a96f4522de"} Feb 20 00:32:33 crc kubenswrapper[5119]: I0220 00:32:33.111157 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m92nq" event={"ID":"e6131c50-034a-422e-8039-4a39cfcbec6a","Type":"ContainerStarted","Data":"40880aee542339a4724fb155d4ee9ee48f242eca5301d903f1b3f715f1872acf"} Feb 20 00:32:33 crc kubenswrapper[5119]: I0220 00:32:33.114307 5119 generic.go:358] "Generic (PLEG): container finished" podID="184644d6-ae9f-4210-b8e1-56d249890787" containerID="96b0e27ca97d104ad0ac5d377025383b12362189041276c23aa5d8f29011af71" exitCode=0 Feb 20 00:32:33 crc kubenswrapper[5119]: I0220 00:32:33.114404 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h74cl" event={"ID":"184644d6-ae9f-4210-b8e1-56d249890787","Type":"ContainerDied","Data":"96b0e27ca97d104ad0ac5d377025383b12362189041276c23aa5d8f29011af71"} Feb 20 00:32:33 crc kubenswrapper[5119]: I0220 00:32:33.114434 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h74cl" event={"ID":"184644d6-ae9f-4210-b8e1-56d249890787","Type":"ContainerStarted","Data":"9c307bebd0bb1a47a6d641e09eb8ac4481433a4e65285acf70efaa3da7f4ec13"} Feb 20 00:32:33 crc kubenswrapper[5119]: I0220 00:32:33.920956 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gkffc"] Feb 20 00:32:33 crc kubenswrapper[5119]: I0220 00:32:33.930977 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gkffc" Feb 20 00:32:33 crc kubenswrapper[5119]: I0220 00:32:33.958563 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gkffc"] Feb 20 00:32:34 crc kubenswrapper[5119]: I0220 00:32:34.072751 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/216a6b5b-f119-4842-a4bc-a2e09cd99513-utilities\") pod \"community-operators-gkffc\" (UID: \"216a6b5b-f119-4842-a4bc-a2e09cd99513\") " pod="openshift-marketplace/community-operators-gkffc" Feb 20 00:32:34 crc kubenswrapper[5119]: I0220 00:32:34.072807 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/216a6b5b-f119-4842-a4bc-a2e09cd99513-catalog-content\") pod \"community-operators-gkffc\" (UID: \"216a6b5b-f119-4842-a4bc-a2e09cd99513\") " pod="openshift-marketplace/community-operators-gkffc" Feb 20 00:32:34 crc kubenswrapper[5119]: I0220 00:32:34.072932 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wmll\" (UniqueName: \"kubernetes.io/projected/216a6b5b-f119-4842-a4bc-a2e09cd99513-kube-api-access-6wmll\") pod \"community-operators-gkffc\" (UID: \"216a6b5b-f119-4842-a4bc-a2e09cd99513\") " pod="openshift-marketplace/community-operators-gkffc" Feb 20 00:32:34 crc kubenswrapper[5119]: I0220 00:32:34.123242 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m92nq" event={"ID":"e6131c50-034a-422e-8039-4a39cfcbec6a","Type":"ContainerStarted","Data":"a0304d4a0282d54bd9a8f6a4be92ff1c865198126356eeae07bf4b819f95a852"} Feb 20 00:32:34 crc kubenswrapper[5119]: I0220 00:32:34.126611 5119 generic.go:358] "Generic (PLEG): container finished" podID="184644d6-ae9f-4210-b8e1-56d249890787" containerID="db90ee71f751621c5a9d7c213acbe8d9d4a400c49a370e78794c1cb1b958b079" exitCode=0 Feb 20 00:32:34 crc kubenswrapper[5119]: I0220 00:32:34.126788 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h74cl" event={"ID":"184644d6-ae9f-4210-b8e1-56d249890787","Type":"ContainerDied","Data":"db90ee71f751621c5a9d7c213acbe8d9d4a400c49a370e78794c1cb1b958b079"} Feb 20 00:32:34 crc kubenswrapper[5119]: I0220 00:32:34.174123 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6wmll\" (UniqueName: \"kubernetes.io/projected/216a6b5b-f119-4842-a4bc-a2e09cd99513-kube-api-access-6wmll\") pod \"community-operators-gkffc\" (UID: \"216a6b5b-f119-4842-a4bc-a2e09cd99513\") " pod="openshift-marketplace/community-operators-gkffc" Feb 20 00:32:34 crc kubenswrapper[5119]: I0220 00:32:34.174192 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/216a6b5b-f119-4842-a4bc-a2e09cd99513-utilities\") pod \"community-operators-gkffc\" (UID: \"216a6b5b-f119-4842-a4bc-a2e09cd99513\") " pod="openshift-marketplace/community-operators-gkffc" Feb 20 00:32:34 crc kubenswrapper[5119]: I0220 00:32:34.174215 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/216a6b5b-f119-4842-a4bc-a2e09cd99513-catalog-content\") pod \"community-operators-gkffc\" (UID: \"216a6b5b-f119-4842-a4bc-a2e09cd99513\") " 
pod="openshift-marketplace/community-operators-gkffc" Feb 20 00:32:34 crc kubenswrapper[5119]: I0220 00:32:34.174759 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/216a6b5b-f119-4842-a4bc-a2e09cd99513-catalog-content\") pod \"community-operators-gkffc\" (UID: \"216a6b5b-f119-4842-a4bc-a2e09cd99513\") " pod="openshift-marketplace/community-operators-gkffc" Feb 20 00:32:34 crc kubenswrapper[5119]: I0220 00:32:34.174882 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/216a6b5b-f119-4842-a4bc-a2e09cd99513-utilities\") pod \"community-operators-gkffc\" (UID: \"216a6b5b-f119-4842-a4bc-a2e09cd99513\") " pod="openshift-marketplace/community-operators-gkffc" Feb 20 00:32:34 crc kubenswrapper[5119]: I0220 00:32:34.195879 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wmll\" (UniqueName: \"kubernetes.io/projected/216a6b5b-f119-4842-a4bc-a2e09cd99513-kube-api-access-6wmll\") pod \"community-operators-gkffc\" (UID: \"216a6b5b-f119-4842-a4bc-a2e09cd99513\") " pod="openshift-marketplace/community-operators-gkffc" Feb 20 00:32:34 crc kubenswrapper[5119]: I0220 00:32:34.274203 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gkffc" Feb 20 00:32:34 crc kubenswrapper[5119]: I0220 00:32:34.770386 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gkffc"] Feb 20 00:32:34 crc kubenswrapper[5119]: W0220 00:32:34.773195 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod216a6b5b_f119_4842_a4bc_a2e09cd99513.slice/crio-9a4dd06a06794b8e18219e2d30760c57eb651a335a865829aba6bbe48250551a WatchSource:0}: Error finding container 9a4dd06a06794b8e18219e2d30760c57eb651a335a865829aba6bbe48250551a: Status 404 returned error can't find the container with id 9a4dd06a06794b8e18219e2d30760c57eb651a335a865829aba6bbe48250551a Feb 20 00:32:35 crc kubenswrapper[5119]: I0220 00:32:35.140120 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h74cl" event={"ID":"184644d6-ae9f-4210-b8e1-56d249890787","Type":"ContainerStarted","Data":"c573debaa54e8723893aaeb8170510bdecd58e6d93a8a39a6e76d4ec51cbb9b3"} Feb 20 00:32:35 crc kubenswrapper[5119]: I0220 00:32:35.143146 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gkffc" event={"ID":"216a6b5b-f119-4842-a4bc-a2e09cd99513","Type":"ContainerStarted","Data":"9a4dd06a06794b8e18219e2d30760c57eb651a335a865829aba6bbe48250551a"} Feb 20 00:32:35 crc kubenswrapper[5119]: I0220 00:32:35.168288 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h74cl" podStartSLOduration=3.597530809 podStartE2EDuration="4.168269697s" podCreationTimestamp="2026-02-20 00:32:31 +0000 UTC" firstStartedPulling="2026-02-20 00:32:33.115124419 +0000 UTC m=+1335.094088721" lastFinishedPulling="2026-02-20 00:32:33.685863307 +0000 UTC m=+1335.664827609" observedRunningTime="2026-02-20 00:32:35.166663595 +0000 UTC m=+1337.145627927" watchObservedRunningTime="2026-02-20 00:32:35.168269697 +0000 UTC m=+1337.147234009" Feb 20 00:32:35 crc kubenswrapper[5119]: I0220 00:32:35.618710 5119 scope.go:117] "RemoveContainer" 
containerID="19373faf1b5ed81c275df47dbb55ad0167a2ab409f5db85c3034a9faa7695cd3" Feb 20 00:32:35 crc kubenswrapper[5119]: I0220 00:32:35.639620 5119 scope.go:117] "RemoveContainer" containerID="51f5ce14bd2f457fab5a67d80ef3dcb29306674b34d3db7c456128020a2b03e5" Feb 20 00:32:36 crc kubenswrapper[5119]: I0220 00:32:36.155725 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gkffc" event={"ID":"216a6b5b-f119-4842-a4bc-a2e09cd99513","Type":"ContainerStarted","Data":"7c6e42d304a04f52a51ca233d9acb47a611341bc43623264c76f7108cfa99fad"} Feb 20 00:32:36 crc kubenswrapper[5119]: I0220 00:32:36.159174 5119 generic.go:358] "Generic (PLEG): container finished" podID="e6131c50-034a-422e-8039-4a39cfcbec6a" containerID="a0304d4a0282d54bd9a8f6a4be92ff1c865198126356eeae07bf4b819f95a852" exitCode=0 Feb 20 00:32:36 crc kubenswrapper[5119]: I0220 00:32:36.159250 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m92nq" event={"ID":"e6131c50-034a-422e-8039-4a39cfcbec6a","Type":"ContainerDied","Data":"a0304d4a0282d54bd9a8f6a4be92ff1c865198126356eeae07bf4b819f95a852"} Feb 20 00:32:37 crc kubenswrapper[5119]: I0220 00:32:37.168266 5119 generic.go:358] "Generic (PLEG): container finished" podID="216a6b5b-f119-4842-a4bc-a2e09cd99513" containerID="7c6e42d304a04f52a51ca233d9acb47a611341bc43623264c76f7108cfa99fad" exitCode=0 Feb 20 00:32:37 crc kubenswrapper[5119]: I0220 00:32:37.168343 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gkffc" event={"ID":"216a6b5b-f119-4842-a4bc-a2e09cd99513","Type":"ContainerDied","Data":"7c6e42d304a04f52a51ca233d9acb47a611341bc43623264c76f7108cfa99fad"} Feb 20 00:32:37 crc kubenswrapper[5119]: I0220 00:32:37.207869 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m92nq" event={"ID":"e6131c50-034a-422e-8039-4a39cfcbec6a","Type":"ContainerStarted","Data":"c9667d1b720d5cc4fcf69c1fc16796d0599a4117953dc5304baa61065710bcc6"} Feb 20 00:32:37 crc kubenswrapper[5119]: I0220 00:32:37.234485 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m92nq" podStartSLOduration=5.553584396 podStartE2EDuration="6.234460225s" podCreationTimestamp="2026-02-20 00:32:31 +0000 UTC" firstStartedPulling="2026-02-20 00:32:33.111860851 +0000 UTC m=+1335.090825163" lastFinishedPulling="2026-02-20 00:32:33.79273669 +0000 UTC m=+1335.771700992" observedRunningTime="2026-02-20 00:32:37.227794157 +0000 UTC m=+1339.206758459" watchObservedRunningTime="2026-02-20 00:32:37.234460225 +0000 UTC m=+1339.213424557" Feb 20 00:32:38 crc kubenswrapper[5119]: I0220 00:32:38.220744 5119 generic.go:358] "Generic (PLEG): container finished" podID="216a6b5b-f119-4842-a4bc-a2e09cd99513" containerID="746db447470e577be102c4fe2cba3f3f738667188e7a92db24b9b23d77905c27" exitCode=0 Feb 20 00:32:38 crc kubenswrapper[5119]: I0220 00:32:38.220822 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gkffc" event={"ID":"216a6b5b-f119-4842-a4bc-a2e09cd99513","Type":"ContainerDied","Data":"746db447470e577be102c4fe2cba3f3f738667188e7a92db24b9b23d77905c27"} Feb 20 00:32:39 crc kubenswrapper[5119]: I0220 00:32:39.234608 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gkffc" 
event={"ID":"216a6b5b-f119-4842-a4bc-a2e09cd99513","Type":"ContainerStarted","Data":"595c9cfaa0bdec39f699b5217e26b49021cc2d7ecc6db852d27066bb241725bf"} Feb 20 00:32:39 crc kubenswrapper[5119]: I0220 00:32:39.264516 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gkffc" podStartSLOduration=5.724822196 podStartE2EDuration="6.264497756s" podCreationTimestamp="2026-02-20 00:32:33 +0000 UTC" firstStartedPulling="2026-02-20 00:32:37.170790865 +0000 UTC m=+1339.149755197" lastFinishedPulling="2026-02-20 00:32:37.710466435 +0000 UTC m=+1339.689430757" observedRunningTime="2026-02-20 00:32:39.260984943 +0000 UTC m=+1341.239949275" watchObservedRunningTime="2026-02-20 00:32:39.264497756 +0000 UTC m=+1341.243462058" Feb 20 00:32:41 crc kubenswrapper[5119]: I0220 00:32:41.893242 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-m92nq" Feb 20 00:32:41 crc kubenswrapper[5119]: I0220 00:32:41.893629 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m92nq" Feb 20 00:32:42 crc kubenswrapper[5119]: I0220 00:32:42.046788 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h74cl" Feb 20 00:32:42 crc kubenswrapper[5119]: I0220 00:32:42.046873 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-h74cl" Feb 20 00:32:42 crc kubenswrapper[5119]: I0220 00:32:42.110305 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h74cl" Feb 20 00:32:42 crc kubenswrapper[5119]: I0220 00:32:42.160965 5119 patch_prober.go:28] interesting pod/machine-config-daemon-l7jjp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:32:42 crc kubenswrapper[5119]: I0220 00:32:42.161057 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" podUID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:32:42 crc kubenswrapper[5119]: I0220 00:32:42.161119 5119 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" Feb 20 00:32:42 crc kubenswrapper[5119]: I0220 00:32:42.161926 5119 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eee996522fc4e847dec015a02c0a8dc42ceadd46ce75e3827be349ce7fa527e3"} pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 20 00:32:42 crc kubenswrapper[5119]: I0220 00:32:42.162034 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" podUID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerName="machine-config-daemon" containerID="cri-o://eee996522fc4e847dec015a02c0a8dc42ceadd46ce75e3827be349ce7fa527e3" gracePeriod=600 Feb 20 00:32:42 crc 
kubenswrapper[5119]: I0220 00:32:42.322781 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h74cl" Feb 20 00:32:42 crc kubenswrapper[5119]: I0220 00:32:42.942719 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m92nq" podUID="e6131c50-034a-422e-8039-4a39cfcbec6a" containerName="registry-server" probeResult="failure" output=< Feb 20 00:32:42 crc kubenswrapper[5119]: timeout: failed to connect service ":50051" within 1s Feb 20 00:32:42 crc kubenswrapper[5119]: > Feb 20 00:32:43 crc kubenswrapper[5119]: I0220 00:32:43.102537 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h74cl"] Feb 20 00:32:43 crc kubenswrapper[5119]: I0220 00:32:43.271018 5119 generic.go:358] "Generic (PLEG): container finished" podID="02bd3ad7-f57a-48e9-86d9-ca9c36a7218d" containerID="eee996522fc4e847dec015a02c0a8dc42ceadd46ce75e3827be349ce7fa527e3" exitCode=0 Feb 20 00:32:43 crc kubenswrapper[5119]: I0220 00:32:43.271088 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" event={"ID":"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d","Type":"ContainerDied","Data":"eee996522fc4e847dec015a02c0a8dc42ceadd46ce75e3827be349ce7fa527e3"} Feb 20 00:32:43 crc kubenswrapper[5119]: I0220 00:32:43.271154 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7jjp" event={"ID":"02bd3ad7-f57a-48e9-86d9-ca9c36a7218d","Type":"ContainerStarted","Data":"9391591f88bfbdb6db5673b5dea7b70179d0315904856b6ffa7f257f1746d284"} Feb 20 00:32:43 crc kubenswrapper[5119]: I0220 00:32:43.271181 5119 scope.go:117] "RemoveContainer" containerID="a542021eb09857f8ffb4ca1336b877f58653295632332fbc3653b725007eaa36" Feb 20 00:32:44 crc kubenswrapper[5119]: I0220 00:32:44.275185 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-gkffc" Feb 20 00:32:44 crc kubenswrapper[5119]: I0220 00:32:44.277097 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gkffc" Feb 20 00:32:44 crc kubenswrapper[5119]: I0220 00:32:44.282668 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h74cl" podUID="184644d6-ae9f-4210-b8e1-56d249890787" containerName="registry-server" containerID="cri-o://c573debaa54e8723893aaeb8170510bdecd58e6d93a8a39a6e76d4ec51cbb9b3" gracePeriod=2 Feb 20 00:32:44 crc kubenswrapper[5119]: I0220 00:32:44.343080 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gkffc" Feb 20 00:32:44 crc kubenswrapper[5119]: I0220 00:32:44.653764 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h74cl" Feb 20 00:32:44 crc kubenswrapper[5119]: I0220 00:32:44.778016 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/184644d6-ae9f-4210-b8e1-56d249890787-catalog-content\") pod \"184644d6-ae9f-4210-b8e1-56d249890787\" (UID: \"184644d6-ae9f-4210-b8e1-56d249890787\") " Feb 20 00:32:44 crc kubenswrapper[5119]: I0220 00:32:44.778364 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkcn4\" (UniqueName: \"kubernetes.io/projected/184644d6-ae9f-4210-b8e1-56d249890787-kube-api-access-xkcn4\") pod \"184644d6-ae9f-4210-b8e1-56d249890787\" (UID: \"184644d6-ae9f-4210-b8e1-56d249890787\") " Feb 20 00:32:44 crc kubenswrapper[5119]: I0220 00:32:44.778583 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/184644d6-ae9f-4210-b8e1-56d249890787-utilities\") pod \"184644d6-ae9f-4210-b8e1-56d249890787\" (UID: \"184644d6-ae9f-4210-b8e1-56d249890787\") " Feb 20 00:32:44 crc kubenswrapper[5119]: I0220 00:32:44.779797 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/184644d6-ae9f-4210-b8e1-56d249890787-utilities" (OuterVolumeSpecName: "utilities") pod "184644d6-ae9f-4210-b8e1-56d249890787" (UID: "184644d6-ae9f-4210-b8e1-56d249890787"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:32:44 crc kubenswrapper[5119]: I0220 00:32:44.784616 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/184644d6-ae9f-4210-b8e1-56d249890787-kube-api-access-xkcn4" (OuterVolumeSpecName: "kube-api-access-xkcn4") pod "184644d6-ae9f-4210-b8e1-56d249890787" (UID: "184644d6-ae9f-4210-b8e1-56d249890787"). InnerVolumeSpecName "kube-api-access-xkcn4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:32:44 crc kubenswrapper[5119]: I0220 00:32:44.850906 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/184644d6-ae9f-4210-b8e1-56d249890787-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "184644d6-ae9f-4210-b8e1-56d249890787" (UID: "184644d6-ae9f-4210-b8e1-56d249890787"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:32:44 crc kubenswrapper[5119]: I0220 00:32:44.881468 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/184644d6-ae9f-4210-b8e1-56d249890787-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:32:44 crc kubenswrapper[5119]: I0220 00:32:44.881531 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/184644d6-ae9f-4210-b8e1-56d249890787-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:32:44 crc kubenswrapper[5119]: I0220 00:32:44.881604 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xkcn4\" (UniqueName: \"kubernetes.io/projected/184644d6-ae9f-4210-b8e1-56d249890787-kube-api-access-xkcn4\") on node \"crc\" DevicePath \"\"" Feb 20 00:32:45 crc kubenswrapper[5119]: I0220 00:32:45.296059 5119 generic.go:358] "Generic (PLEG): container finished" podID="184644d6-ae9f-4210-b8e1-56d249890787" containerID="c573debaa54e8723893aaeb8170510bdecd58e6d93a8a39a6e76d4ec51cbb9b3" exitCode=0 Feb 20 00:32:45 crc kubenswrapper[5119]: I0220 00:32:45.297365 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h74cl" Feb 20 00:32:45 crc kubenswrapper[5119]: I0220 00:32:45.297853 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h74cl" event={"ID":"184644d6-ae9f-4210-b8e1-56d249890787","Type":"ContainerDied","Data":"c573debaa54e8723893aaeb8170510bdecd58e6d93a8a39a6e76d4ec51cbb9b3"} Feb 20 00:32:45 crc kubenswrapper[5119]: I0220 00:32:45.297888 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h74cl" event={"ID":"184644d6-ae9f-4210-b8e1-56d249890787","Type":"ContainerDied","Data":"9c307bebd0bb1a47a6d641e09eb8ac4481433a4e65285acf70efaa3da7f4ec13"} Feb 20 00:32:45 crc kubenswrapper[5119]: I0220 00:32:45.297910 5119 scope.go:117] "RemoveContainer" containerID="c573debaa54e8723893aaeb8170510bdecd58e6d93a8a39a6e76d4ec51cbb9b3" Feb 20 00:32:45 crc kubenswrapper[5119]: I0220 00:32:45.322285 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h74cl"] Feb 20 00:32:45 crc kubenswrapper[5119]: I0220 00:32:45.327798 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h74cl"] Feb 20 00:32:45 crc kubenswrapper[5119]: I0220 00:32:45.329817 5119 scope.go:117] "RemoveContainer" containerID="db90ee71f751621c5a9d7c213acbe8d9d4a400c49a370e78794c1cb1b958b079" Feb 20 00:32:45 crc kubenswrapper[5119]: I0220 00:32:45.354196 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gkffc" Feb 20 00:32:45 crc kubenswrapper[5119]: I0220 00:32:45.372206 5119 scope.go:117] "RemoveContainer" containerID="96b0e27ca97d104ad0ac5d377025383b12362189041276c23aa5d8f29011af71" Feb 20 00:32:45 crc kubenswrapper[5119]: I0220 00:32:45.397336 5119 scope.go:117] "RemoveContainer" containerID="c573debaa54e8723893aaeb8170510bdecd58e6d93a8a39a6e76d4ec51cbb9b3" Feb 20 00:32:45 crc kubenswrapper[5119]: E0220 00:32:45.397964 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c573debaa54e8723893aaeb8170510bdecd58e6d93a8a39a6e76d4ec51cbb9b3\": container with ID starting with 
c573debaa54e8723893aaeb8170510bdecd58e6d93a8a39a6e76d4ec51cbb9b3 not found: ID does not exist" containerID="c573debaa54e8723893aaeb8170510bdecd58e6d93a8a39a6e76d4ec51cbb9b3" Feb 20 00:32:45 crc kubenswrapper[5119]: I0220 00:32:45.398126 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c573debaa54e8723893aaeb8170510bdecd58e6d93a8a39a6e76d4ec51cbb9b3"} err="failed to get container status \"c573debaa54e8723893aaeb8170510bdecd58e6d93a8a39a6e76d4ec51cbb9b3\": rpc error: code = NotFound desc = could not find container \"c573debaa54e8723893aaeb8170510bdecd58e6d93a8a39a6e76d4ec51cbb9b3\": container with ID starting with c573debaa54e8723893aaeb8170510bdecd58e6d93a8a39a6e76d4ec51cbb9b3 not found: ID does not exist" Feb 20 00:32:45 crc kubenswrapper[5119]: I0220 00:32:45.398257 5119 scope.go:117] "RemoveContainer" containerID="db90ee71f751621c5a9d7c213acbe8d9d4a400c49a370e78794c1cb1b958b079" Feb 20 00:32:45 crc kubenswrapper[5119]: E0220 00:32:45.398840 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db90ee71f751621c5a9d7c213acbe8d9d4a400c49a370e78794c1cb1b958b079\": container with ID starting with db90ee71f751621c5a9d7c213acbe8d9d4a400c49a370e78794c1cb1b958b079 not found: ID does not exist" containerID="db90ee71f751621c5a9d7c213acbe8d9d4a400c49a370e78794c1cb1b958b079" Feb 20 00:32:45 crc kubenswrapper[5119]: I0220 00:32:45.398896 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db90ee71f751621c5a9d7c213acbe8d9d4a400c49a370e78794c1cb1b958b079"} err="failed to get container status \"db90ee71f751621c5a9d7c213acbe8d9d4a400c49a370e78794c1cb1b958b079\": rpc error: code = NotFound desc = could not find container \"db90ee71f751621c5a9d7c213acbe8d9d4a400c49a370e78794c1cb1b958b079\": container with ID starting with db90ee71f751621c5a9d7c213acbe8d9d4a400c49a370e78794c1cb1b958b079 not found: ID does not exist" Feb 20 00:32:45 crc kubenswrapper[5119]: I0220 00:32:45.398931 5119 scope.go:117] "RemoveContainer" containerID="96b0e27ca97d104ad0ac5d377025383b12362189041276c23aa5d8f29011af71" Feb 20 00:32:45 crc kubenswrapper[5119]: E0220 00:32:45.399372 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96b0e27ca97d104ad0ac5d377025383b12362189041276c23aa5d8f29011af71\": container with ID starting with 96b0e27ca97d104ad0ac5d377025383b12362189041276c23aa5d8f29011af71 not found: ID does not exist" containerID="96b0e27ca97d104ad0ac5d377025383b12362189041276c23aa5d8f29011af71" Feb 20 00:32:45 crc kubenswrapper[5119]: I0220 00:32:45.399516 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96b0e27ca97d104ad0ac5d377025383b12362189041276c23aa5d8f29011af71"} err="failed to get container status \"96b0e27ca97d104ad0ac5d377025383b12362189041276c23aa5d8f29011af71\": rpc error: code = NotFound desc = could not find container \"96b0e27ca97d104ad0ac5d377025383b12362189041276c23aa5d8f29011af71\": container with ID starting with 96b0e27ca97d104ad0ac5d377025383b12362189041276c23aa5d8f29011af71 not found: ID does not exist" Feb 20 00:32:46 crc kubenswrapper[5119]: I0220 00:32:46.305670 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gkffc"] Feb 20 00:32:46 crc kubenswrapper[5119]: I0220 00:32:46.870997 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="184644d6-ae9f-4210-b8e1-56d249890787" path="/var/lib/kubelet/pods/184644d6-ae9f-4210-b8e1-56d249890787/volumes" Feb 20 00:32:47 crc kubenswrapper[5119]: I0220 00:32:47.318603 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gkffc" podUID="216a6b5b-f119-4842-a4bc-a2e09cd99513" containerName="registry-server" containerID="cri-o://595c9cfaa0bdec39f699b5217e26b49021cc2d7ecc6db852d27066bb241725bf" gracePeriod=2 Feb 20 00:32:47 crc kubenswrapper[5119]: I0220 00:32:47.740836 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gkffc" Feb 20 00:32:47 crc kubenswrapper[5119]: I0220 00:32:47.834166 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/216a6b5b-f119-4842-a4bc-a2e09cd99513-utilities\") pod \"216a6b5b-f119-4842-a4bc-a2e09cd99513\" (UID: \"216a6b5b-f119-4842-a4bc-a2e09cd99513\") " Feb 20 00:32:47 crc kubenswrapper[5119]: I0220 00:32:47.834256 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wmll\" (UniqueName: \"kubernetes.io/projected/216a6b5b-f119-4842-a4bc-a2e09cd99513-kube-api-access-6wmll\") pod \"216a6b5b-f119-4842-a4bc-a2e09cd99513\" (UID: \"216a6b5b-f119-4842-a4bc-a2e09cd99513\") " Feb 20 00:32:47 crc kubenswrapper[5119]: I0220 00:32:47.834381 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/216a6b5b-f119-4842-a4bc-a2e09cd99513-catalog-content\") pod \"216a6b5b-f119-4842-a4bc-a2e09cd99513\" (UID: \"216a6b5b-f119-4842-a4bc-a2e09cd99513\") " Feb 20 00:32:47 crc kubenswrapper[5119]: I0220 00:32:47.835557 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/216a6b5b-f119-4842-a4bc-a2e09cd99513-utilities" (OuterVolumeSpecName: "utilities") pod "216a6b5b-f119-4842-a4bc-a2e09cd99513" (UID: "216a6b5b-f119-4842-a4bc-a2e09cd99513"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:32:47 crc kubenswrapper[5119]: I0220 00:32:47.853054 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/216a6b5b-f119-4842-a4bc-a2e09cd99513-kube-api-access-6wmll" (OuterVolumeSpecName: "kube-api-access-6wmll") pod "216a6b5b-f119-4842-a4bc-a2e09cd99513" (UID: "216a6b5b-f119-4842-a4bc-a2e09cd99513"). InnerVolumeSpecName "kube-api-access-6wmll". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:32:47 crc kubenswrapper[5119]: I0220 00:32:47.917190 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/216a6b5b-f119-4842-a4bc-a2e09cd99513-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "216a6b5b-f119-4842-a4bc-a2e09cd99513" (UID: "216a6b5b-f119-4842-a4bc-a2e09cd99513"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:32:47 crc kubenswrapper[5119]: I0220 00:32:47.936653 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/216a6b5b-f119-4842-a4bc-a2e09cd99513-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:32:47 crc kubenswrapper[5119]: I0220 00:32:47.937119 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/216a6b5b-f119-4842-a4bc-a2e09cd99513-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:32:47 crc kubenswrapper[5119]: I0220 00:32:47.937340 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6wmll\" (UniqueName: \"kubernetes.io/projected/216a6b5b-f119-4842-a4bc-a2e09cd99513-kube-api-access-6wmll\") on node \"crc\" DevicePath \"\"" Feb 20 00:32:48 crc kubenswrapper[5119]: I0220 00:32:48.333948 5119 generic.go:358] "Generic (PLEG): container finished" podID="216a6b5b-f119-4842-a4bc-a2e09cd99513" containerID="595c9cfaa0bdec39f699b5217e26b49021cc2d7ecc6db852d27066bb241725bf" exitCode=0 Feb 20 00:32:48 crc kubenswrapper[5119]: I0220 00:32:48.334143 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gkffc" event={"ID":"216a6b5b-f119-4842-a4bc-a2e09cd99513","Type":"ContainerDied","Data":"595c9cfaa0bdec39f699b5217e26b49021cc2d7ecc6db852d27066bb241725bf"} Feb 20 00:32:48 crc kubenswrapper[5119]: I0220 00:32:48.334222 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gkffc" event={"ID":"216a6b5b-f119-4842-a4bc-a2e09cd99513","Type":"ContainerDied","Data":"9a4dd06a06794b8e18219e2d30760c57eb651a335a865829aba6bbe48250551a"} Feb 20 00:32:48 crc kubenswrapper[5119]: I0220 00:32:48.334258 5119 scope.go:117] "RemoveContainer" containerID="595c9cfaa0bdec39f699b5217e26b49021cc2d7ecc6db852d27066bb241725bf" Feb 20 00:32:48 crc kubenswrapper[5119]: I0220 00:32:48.334435 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gkffc" Feb 20 00:32:48 crc kubenswrapper[5119]: I0220 00:32:48.364672 5119 scope.go:117] "RemoveContainer" containerID="746db447470e577be102c4fe2cba3f3f738667188e7a92db24b9b23d77905c27" Feb 20 00:32:48 crc kubenswrapper[5119]: I0220 00:32:48.393222 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gkffc"] Feb 20 00:32:48 crc kubenswrapper[5119]: I0220 00:32:48.399663 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gkffc"] Feb 20 00:32:48 crc kubenswrapper[5119]: I0220 00:32:48.415308 5119 scope.go:117] "RemoveContainer" containerID="7c6e42d304a04f52a51ca233d9acb47a611341bc43623264c76f7108cfa99fad" Feb 20 00:32:48 crc kubenswrapper[5119]: I0220 00:32:48.448761 5119 scope.go:117] "RemoveContainer" containerID="595c9cfaa0bdec39f699b5217e26b49021cc2d7ecc6db852d27066bb241725bf" Feb 20 00:32:48 crc kubenswrapper[5119]: E0220 00:32:48.449213 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"595c9cfaa0bdec39f699b5217e26b49021cc2d7ecc6db852d27066bb241725bf\": container with ID starting with 595c9cfaa0bdec39f699b5217e26b49021cc2d7ecc6db852d27066bb241725bf not found: ID does not exist" containerID="595c9cfaa0bdec39f699b5217e26b49021cc2d7ecc6db852d27066bb241725bf" Feb 20 00:32:48 crc kubenswrapper[5119]: I0220 00:32:48.449253 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"595c9cfaa0bdec39f699b5217e26b49021cc2d7ecc6db852d27066bb241725bf"} err="failed to get container status \"595c9cfaa0bdec39f699b5217e26b49021cc2d7ecc6db852d27066bb241725bf\": rpc error: code = NotFound desc = could not find container \"595c9cfaa0bdec39f699b5217e26b49021cc2d7ecc6db852d27066bb241725bf\": container with ID starting with 595c9cfaa0bdec39f699b5217e26b49021cc2d7ecc6db852d27066bb241725bf not found: ID does not exist" Feb 20 00:32:48 crc kubenswrapper[5119]: I0220 00:32:48.449282 5119 scope.go:117] "RemoveContainer" containerID="746db447470e577be102c4fe2cba3f3f738667188e7a92db24b9b23d77905c27" Feb 20 00:32:48 crc kubenswrapper[5119]: E0220 00:32:48.449522 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"746db447470e577be102c4fe2cba3f3f738667188e7a92db24b9b23d77905c27\": container with ID starting with 746db447470e577be102c4fe2cba3f3f738667188e7a92db24b9b23d77905c27 not found: ID does not exist" containerID="746db447470e577be102c4fe2cba3f3f738667188e7a92db24b9b23d77905c27" Feb 20 00:32:48 crc kubenswrapper[5119]: I0220 00:32:48.449606 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"746db447470e577be102c4fe2cba3f3f738667188e7a92db24b9b23d77905c27"} err="failed to get container status \"746db447470e577be102c4fe2cba3f3f738667188e7a92db24b9b23d77905c27\": rpc error: code = NotFound desc = could not find container \"746db447470e577be102c4fe2cba3f3f738667188e7a92db24b9b23d77905c27\": container with ID starting with 746db447470e577be102c4fe2cba3f3f738667188e7a92db24b9b23d77905c27 not found: ID does not exist" Feb 20 00:32:48 crc kubenswrapper[5119]: I0220 00:32:48.449632 5119 scope.go:117] "RemoveContainer" containerID="7c6e42d304a04f52a51ca233d9acb47a611341bc43623264c76f7108cfa99fad" Feb 20 00:32:48 crc kubenswrapper[5119]: E0220 00:32:48.449882 5119 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7c6e42d304a04f52a51ca233d9acb47a611341bc43623264c76f7108cfa99fad\": container with ID starting with 7c6e42d304a04f52a51ca233d9acb47a611341bc43623264c76f7108cfa99fad not found: ID does not exist" containerID="7c6e42d304a04f52a51ca233d9acb47a611341bc43623264c76f7108cfa99fad" Feb 20 00:32:48 crc kubenswrapper[5119]: I0220 00:32:48.449900 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c6e42d304a04f52a51ca233d9acb47a611341bc43623264c76f7108cfa99fad"} err="failed to get container status \"7c6e42d304a04f52a51ca233d9acb47a611341bc43623264c76f7108cfa99fad\": rpc error: code = NotFound desc = could not find container \"7c6e42d304a04f52a51ca233d9acb47a611341bc43623264c76f7108cfa99fad\": container with ID starting with 7c6e42d304a04f52a51ca233d9acb47a611341bc43623264c76f7108cfa99fad not found: ID does not exist" Feb 20 00:32:48 crc kubenswrapper[5119]: I0220 00:32:48.876688 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="216a6b5b-f119-4842-a4bc-a2e09cd99513" path="/var/lib/kubelet/pods/216a6b5b-f119-4842-a4bc-a2e09cd99513/volumes" Feb 20 00:32:52 crc kubenswrapper[5119]: I0220 00:32:52.963627 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m92nq" podUID="e6131c50-034a-422e-8039-4a39cfcbec6a" containerName="registry-server" probeResult="failure" output=< Feb 20 00:32:52 crc kubenswrapper[5119]: timeout: failed to connect service ":50051" within 1s Feb 20 00:32:52 crc kubenswrapper[5119]: > Feb 20 00:33:01 crc kubenswrapper[5119]: I0220 00:33:01.957506 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m92nq" Feb 20 00:33:02 crc kubenswrapper[5119]: I0220 00:33:02.006187 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m92nq" Feb 20 00:33:02 crc kubenswrapper[5119]: I0220 00:33:02.731398 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m92nq"] Feb 20 00:33:03 crc kubenswrapper[5119]: I0220 00:33:03.488109 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m92nq" podUID="e6131c50-034a-422e-8039-4a39cfcbec6a" containerName="registry-server" containerID="cri-o://c9667d1b720d5cc4fcf69c1fc16796d0599a4117953dc5304baa61065710bcc6" gracePeriod=2 Feb 20 00:33:03 crc kubenswrapper[5119]: I0220 00:33:03.963496 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m92nq" Feb 20 00:33:04 crc kubenswrapper[5119]: I0220 00:33:04.032895 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6131c50-034a-422e-8039-4a39cfcbec6a-catalog-content\") pod \"e6131c50-034a-422e-8039-4a39cfcbec6a\" (UID: \"e6131c50-034a-422e-8039-4a39cfcbec6a\") " Feb 20 00:33:04 crc kubenswrapper[5119]: I0220 00:33:04.033005 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6131c50-034a-422e-8039-4a39cfcbec6a-utilities\") pod \"e6131c50-034a-422e-8039-4a39cfcbec6a\" (UID: \"e6131c50-034a-422e-8039-4a39cfcbec6a\") " Feb 20 00:33:04 crc kubenswrapper[5119]: I0220 00:33:04.033177 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hjvq\" (UniqueName: \"kubernetes.io/projected/e6131c50-034a-422e-8039-4a39cfcbec6a-kube-api-access-6hjvq\") pod \"e6131c50-034a-422e-8039-4a39cfcbec6a\" (UID: \"e6131c50-034a-422e-8039-4a39cfcbec6a\") " Feb 20 00:33:04 crc kubenswrapper[5119]: I0220 00:33:04.035738 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6131c50-034a-422e-8039-4a39cfcbec6a-utilities" (OuterVolumeSpecName: "utilities") pod "e6131c50-034a-422e-8039-4a39cfcbec6a" (UID: "e6131c50-034a-422e-8039-4a39cfcbec6a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:33:04 crc kubenswrapper[5119]: I0220 00:33:04.043134 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6131c50-034a-422e-8039-4a39cfcbec6a-kube-api-access-6hjvq" (OuterVolumeSpecName: "kube-api-access-6hjvq") pod "e6131c50-034a-422e-8039-4a39cfcbec6a" (UID: "e6131c50-034a-422e-8039-4a39cfcbec6a"). InnerVolumeSpecName "kube-api-access-6hjvq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:33:04 crc kubenswrapper[5119]: I0220 00:33:04.135668 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6131c50-034a-422e-8039-4a39cfcbec6a-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:33:04 crc kubenswrapper[5119]: I0220 00:33:04.135723 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6hjvq\" (UniqueName: \"kubernetes.io/projected/e6131c50-034a-422e-8039-4a39cfcbec6a-kube-api-access-6hjvq\") on node \"crc\" DevicePath \"\"" Feb 20 00:33:04 crc kubenswrapper[5119]: I0220 00:33:04.199118 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6131c50-034a-422e-8039-4a39cfcbec6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6131c50-034a-422e-8039-4a39cfcbec6a" (UID: "e6131c50-034a-422e-8039-4a39cfcbec6a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:33:04 crc kubenswrapper[5119]: I0220 00:33:04.236961 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6131c50-034a-422e-8039-4a39cfcbec6a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:33:04 crc kubenswrapper[5119]: I0220 00:33:04.503723 5119 generic.go:358] "Generic (PLEG): container finished" podID="e6131c50-034a-422e-8039-4a39cfcbec6a" containerID="c9667d1b720d5cc4fcf69c1fc16796d0599a4117953dc5304baa61065710bcc6" exitCode=0 Feb 20 00:33:04 crc kubenswrapper[5119]: I0220 00:33:04.503904 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m92nq" Feb 20 00:33:04 crc kubenswrapper[5119]: I0220 00:33:04.503976 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m92nq" event={"ID":"e6131c50-034a-422e-8039-4a39cfcbec6a","Type":"ContainerDied","Data":"c9667d1b720d5cc4fcf69c1fc16796d0599a4117953dc5304baa61065710bcc6"} Feb 20 00:33:04 crc kubenswrapper[5119]: I0220 00:33:04.504609 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m92nq" event={"ID":"e6131c50-034a-422e-8039-4a39cfcbec6a","Type":"ContainerDied","Data":"40880aee542339a4724fb155d4ee9ee48f242eca5301d903f1b3f715f1872acf"} Feb 20 00:33:04 crc kubenswrapper[5119]: I0220 00:33:04.504668 5119 scope.go:117] "RemoveContainer" containerID="c9667d1b720d5cc4fcf69c1fc16796d0599a4117953dc5304baa61065710bcc6" Feb 20 00:33:04 crc kubenswrapper[5119]: I0220 00:33:04.537668 5119 scope.go:117] "RemoveContainer" containerID="a0304d4a0282d54bd9a8f6a4be92ff1c865198126356eeae07bf4b819f95a852" Feb 20 00:33:04 crc kubenswrapper[5119]: I0220 00:33:04.560307 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m92nq"] Feb 20 00:33:04 crc kubenswrapper[5119]: I0220 00:33:04.570377 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m92nq"] Feb 20 00:33:04 crc kubenswrapper[5119]: I0220 00:33:04.592116 5119 scope.go:117] "RemoveContainer" containerID="ea17088018d9310207804f340fd1f5c2610ac45b7ac66edc803965a96f4522de" Feb 20 00:33:04 crc kubenswrapper[5119]: I0220 00:33:04.626019 5119 scope.go:117] "RemoveContainer" containerID="c9667d1b720d5cc4fcf69c1fc16796d0599a4117953dc5304baa61065710bcc6" Feb 20 00:33:04 crc kubenswrapper[5119]: E0220 00:33:04.626669 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9667d1b720d5cc4fcf69c1fc16796d0599a4117953dc5304baa61065710bcc6\": container with ID starting with c9667d1b720d5cc4fcf69c1fc16796d0599a4117953dc5304baa61065710bcc6 not found: ID does not exist" containerID="c9667d1b720d5cc4fcf69c1fc16796d0599a4117953dc5304baa61065710bcc6" Feb 20 00:33:04 crc kubenswrapper[5119]: I0220 00:33:04.626743 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9667d1b720d5cc4fcf69c1fc16796d0599a4117953dc5304baa61065710bcc6"} err="failed to get container status \"c9667d1b720d5cc4fcf69c1fc16796d0599a4117953dc5304baa61065710bcc6\": rpc error: code = NotFound desc = could not find container \"c9667d1b720d5cc4fcf69c1fc16796d0599a4117953dc5304baa61065710bcc6\": container with ID starting with c9667d1b720d5cc4fcf69c1fc16796d0599a4117953dc5304baa61065710bcc6 not found: ID does not exist" Feb 20 00:33:04 crc 
kubenswrapper[5119]: I0220 00:33:04.626785 5119 scope.go:117] "RemoveContainer" containerID="a0304d4a0282d54bd9a8f6a4be92ff1c865198126356eeae07bf4b819f95a852" Feb 20 00:33:04 crc kubenswrapper[5119]: E0220 00:33:04.627334 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0304d4a0282d54bd9a8f6a4be92ff1c865198126356eeae07bf4b819f95a852\": container with ID starting with a0304d4a0282d54bd9a8f6a4be92ff1c865198126356eeae07bf4b819f95a852 not found: ID does not exist" containerID="a0304d4a0282d54bd9a8f6a4be92ff1c865198126356eeae07bf4b819f95a852" Feb 20 00:33:04 crc kubenswrapper[5119]: I0220 00:33:04.627399 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0304d4a0282d54bd9a8f6a4be92ff1c865198126356eeae07bf4b819f95a852"} err="failed to get container status \"a0304d4a0282d54bd9a8f6a4be92ff1c865198126356eeae07bf4b819f95a852\": rpc error: code = NotFound desc = could not find container \"a0304d4a0282d54bd9a8f6a4be92ff1c865198126356eeae07bf4b819f95a852\": container with ID starting with a0304d4a0282d54bd9a8f6a4be92ff1c865198126356eeae07bf4b819f95a852 not found: ID does not exist" Feb 20 00:33:04 crc kubenswrapper[5119]: I0220 00:33:04.627444 5119 scope.go:117] "RemoveContainer" containerID="ea17088018d9310207804f340fd1f5c2610ac45b7ac66edc803965a96f4522de" Feb 20 00:33:04 crc kubenswrapper[5119]: E0220 00:33:04.627971 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea17088018d9310207804f340fd1f5c2610ac45b7ac66edc803965a96f4522de\": container with ID starting with ea17088018d9310207804f340fd1f5c2610ac45b7ac66edc803965a96f4522de not found: ID does not exist" containerID="ea17088018d9310207804f340fd1f5c2610ac45b7ac66edc803965a96f4522de" Feb 20 00:33:04 crc kubenswrapper[5119]: I0220 00:33:04.628036 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea17088018d9310207804f340fd1f5c2610ac45b7ac66edc803965a96f4522de"} err="failed to get container status \"ea17088018d9310207804f340fd1f5c2610ac45b7ac66edc803965a96f4522de\": rpc error: code = NotFound desc = could not find container \"ea17088018d9310207804f340fd1f5c2610ac45b7ac66edc803965a96f4522de\": container with ID starting with ea17088018d9310207804f340fd1f5c2610ac45b7ac66edc803965a96f4522de not found: ID does not exist" Feb 20 00:33:04 crc kubenswrapper[5119]: I0220 00:33:04.874364 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6131c50-034a-422e-8039-4a39cfcbec6a" path="/var/lib/kubelet/pods/e6131c50-034a-422e-8039-4a39cfcbec6a/volumes" Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.149259 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29525794-pghb9"] Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.150712 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="184644d6-ae9f-4210-b8e1-56d249890787" containerName="extract-utilities" Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.150729 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="184644d6-ae9f-4210-b8e1-56d249890787" containerName="extract-utilities" Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.150741 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e6131c50-034a-422e-8039-4a39cfcbec6a" containerName="registry-server" Feb 20 00:34:00 crc kubenswrapper[5119]: 
I0220 00:34:00.150749 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6131c50-034a-422e-8039-4a39cfcbec6a" containerName="registry-server" Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.150784 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="216a6b5b-f119-4842-a4bc-a2e09cd99513" containerName="extract-utilities" Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.150791 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="216a6b5b-f119-4842-a4bc-a2e09cd99513" containerName="extract-utilities" Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.150806 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e6131c50-034a-422e-8039-4a39cfcbec6a" containerName="extract-utilities" Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.150814 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6131c50-034a-422e-8039-4a39cfcbec6a" containerName="extract-utilities" Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.150837 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="184644d6-ae9f-4210-b8e1-56d249890787" containerName="registry-server" Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.150844 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="184644d6-ae9f-4210-b8e1-56d249890787" containerName="registry-server" Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.150855 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e6131c50-034a-422e-8039-4a39cfcbec6a" containerName="extract-content" Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.150862 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6131c50-034a-422e-8039-4a39cfcbec6a" containerName="extract-content" Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.150874 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="216a6b5b-f119-4842-a4bc-a2e09cd99513" containerName="extract-content" Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.150882 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="216a6b5b-f119-4842-a4bc-a2e09cd99513" containerName="extract-content" Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.150895 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="216a6b5b-f119-4842-a4bc-a2e09cd99513" containerName="registry-server" Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.150903 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="216a6b5b-f119-4842-a4bc-a2e09cd99513" containerName="registry-server" Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.150918 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="184644d6-ae9f-4210-b8e1-56d249890787" containerName="extract-content" Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.150925 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="184644d6-ae9f-4210-b8e1-56d249890787" containerName="extract-content" Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.151073 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="184644d6-ae9f-4210-b8e1-56d249890787" containerName="registry-server" Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.151091 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="e6131c50-034a-422e-8039-4a39cfcbec6a" containerName="registry-server" Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.151103 5119 memory_manager.go:356] "RemoveStaleState 
removing state" podUID="216a6b5b-f119-4842-a4bc-a2e09cd99513" containerName="registry-server" Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.158510 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525794-pghb9" Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.162889 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525794-pghb9"] Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.164457 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-nmc85\"" Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.164619 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.165698 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.336083 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fn2s\" (UniqueName: \"kubernetes.io/projected/d95ae798-6fe7-4c00-bcd6-ec3f44d0cb06-kube-api-access-4fn2s\") pod \"auto-csr-approver-29525794-pghb9\" (UID: \"d95ae798-6fe7-4c00-bcd6-ec3f44d0cb06\") " pod="openshift-infra/auto-csr-approver-29525794-pghb9" Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.438515 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4fn2s\" (UniqueName: \"kubernetes.io/projected/d95ae798-6fe7-4c00-bcd6-ec3f44d0cb06-kube-api-access-4fn2s\") pod \"auto-csr-approver-29525794-pghb9\" (UID: \"d95ae798-6fe7-4c00-bcd6-ec3f44d0cb06\") " pod="openshift-infra/auto-csr-approver-29525794-pghb9" Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.467075 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fn2s\" (UniqueName: \"kubernetes.io/projected/d95ae798-6fe7-4c00-bcd6-ec3f44d0cb06-kube-api-access-4fn2s\") pod \"auto-csr-approver-29525794-pghb9\" (UID: \"d95ae798-6fe7-4c00-bcd6-ec3f44d0cb06\") " pod="openshift-infra/auto-csr-approver-29525794-pghb9" Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.494897 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525794-pghb9" Feb 20 00:34:00 crc kubenswrapper[5119]: I0220 00:34:00.776288 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525794-pghb9"] Feb 20 00:34:00 crc kubenswrapper[5119]: W0220 00:34:00.788329 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd95ae798_6fe7_4c00_bcd6_ec3f44d0cb06.slice/crio-dafc850cc5604816a69788de69731a63b0a7bab721fe9bc8567d03c74472658a WatchSource:0}: Error finding container dafc850cc5604816a69788de69731a63b0a7bab721fe9bc8567d03c74472658a: Status 404 returned error can't find the container with id dafc850cc5604816a69788de69731a63b0a7bab721fe9bc8567d03c74472658a Feb 20 00:34:01 crc kubenswrapper[5119]: I0220 00:34:01.074439 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525794-pghb9" event={"ID":"d95ae798-6fe7-4c00-bcd6-ec3f44d0cb06","Type":"ContainerStarted","Data":"dafc850cc5604816a69788de69731a63b0a7bab721fe9bc8567d03c74472658a"} Feb 20 00:34:03 crc kubenswrapper[5119]: I0220 00:34:03.095214 5119 generic.go:358] "Generic (PLEG): container finished" podID="d95ae798-6fe7-4c00-bcd6-ec3f44d0cb06" containerID="67bcab905d2d1e573f52a48223adc625f23a4ed1d57af3b3e6519e08e10b6cf3" exitCode=0 Feb 20 00:34:03 crc kubenswrapper[5119]: I0220 00:34:03.095320 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525794-pghb9" event={"ID":"d95ae798-6fe7-4c00-bcd6-ec3f44d0cb06","Type":"ContainerDied","Data":"67bcab905d2d1e573f52a48223adc625f23a4ed1d57af3b3e6519e08e10b6cf3"} Feb 20 00:34:04 crc kubenswrapper[5119]: I0220 00:34:04.414380 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525794-pghb9" Feb 20 00:34:04 crc kubenswrapper[5119]: I0220 00:34:04.504176 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fn2s\" (UniqueName: \"kubernetes.io/projected/d95ae798-6fe7-4c00-bcd6-ec3f44d0cb06-kube-api-access-4fn2s\") pod \"d95ae798-6fe7-4c00-bcd6-ec3f44d0cb06\" (UID: \"d95ae798-6fe7-4c00-bcd6-ec3f44d0cb06\") " Feb 20 00:34:04 crc kubenswrapper[5119]: I0220 00:34:04.512715 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d95ae798-6fe7-4c00-bcd6-ec3f44d0cb06-kube-api-access-4fn2s" (OuterVolumeSpecName: "kube-api-access-4fn2s") pod "d95ae798-6fe7-4c00-bcd6-ec3f44d0cb06" (UID: "d95ae798-6fe7-4c00-bcd6-ec3f44d0cb06"). InnerVolumeSpecName "kube-api-access-4fn2s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:34:04 crc kubenswrapper[5119]: I0220 00:34:04.607795 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4fn2s\" (UniqueName: \"kubernetes.io/projected/d95ae798-6fe7-4c00-bcd6-ec3f44d0cb06-kube-api-access-4fn2s\") on node \"crc\" DevicePath \"\"" Feb 20 00:34:05 crc kubenswrapper[5119]: I0220 00:34:05.126168 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525794-pghb9" Feb 20 00:34:05 crc kubenswrapper[5119]: I0220 00:34:05.126205 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525794-pghb9" event={"ID":"d95ae798-6fe7-4c00-bcd6-ec3f44d0cb06","Type":"ContainerDied","Data":"dafc850cc5604816a69788de69731a63b0a7bab721fe9bc8567d03c74472658a"} Feb 20 00:34:05 crc kubenswrapper[5119]: I0220 00:34:05.126669 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dafc850cc5604816a69788de69731a63b0a7bab721fe9bc8567d03c74472658a" Feb 20 00:34:05 crc kubenswrapper[5119]: I0220 00:34:05.499932 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29525788-6x4mn"] Feb 20 00:34:05 crc kubenswrapper[5119]: I0220 00:34:05.507681 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29525788-6x4mn"] Feb 20 00:34:06 crc kubenswrapper[5119]: I0220 00:34:06.869475 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68f070b6-1c0e-49b5-bbc8-938761ce678c" path="/var/lib/kubelet/pods/68f070b6-1c0e-49b5-bbc8-938761ce678c/volumes"