Jan 21 09:54:44 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 21 09:54:44 crc kubenswrapper[5119]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 21 09:54:44 crc kubenswrapper[5119]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 21 09:54:44 crc kubenswrapper[5119]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 21 09:54:44 crc kubenswrapper[5119]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 21 09:54:44 crc kubenswrapper[5119]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 21 09:54:44 crc kubenswrapper[5119]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
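The deprecation warnings above all point at the same remedy: move these flags into the file passed via --config. A minimal sketch of what that KubeletConfiguration fragment could look like is below — the field names are the standard kubelet.config.k8s.io/v1beta1 ones, the runtime endpoint matches the FLAG dump later in this log, but the plugin directory, taint, and reserved-resource values are placeholders, not values taken from this node:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Replaces --container-runtime-endpoint (socket path as logged in the FLAG dump)
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
# Replaces --volume-plugin-dir (path is a placeholder)
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
# Replaces --register-with-taints (taint is a placeholder)
registerWithTaints:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
# Replaces --system-reserved (values are placeholders)
systemReserved:
  cpu: 500m
  memory: 1Gi
# Replaces --minimum-container-ttl-duration, per the warning's advice
# to use eviction thresholds instead (threshold is a placeholder)
evictionHard:
  memory.available: 100Mi
```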
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.433241 5119 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436471 5119 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436489 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436493 5119 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436497 5119 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436500 5119 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436504 5119 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436507 5119 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436510 5119 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436514 5119 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436517 5119 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436520 5119 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436524 5119 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436527 5119 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436530 5119 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436534 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436537 5119 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436540 5119 feature_gate.go:328] unrecognized feature gate: Example
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436543 5119 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436546 5119 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436550 5119 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436553 5119 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436558 5119 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436561 5119 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436564 5119 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436567 5119 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436571 5119 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436574 5119 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436577 5119 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436580 5119 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436583 5119 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436586 5119 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436589 5119 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436593 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436596 5119 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436599 5119 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436622 5119 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436625 5119 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436630 5119 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436634 5119 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436641 5119 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436645 5119 feature_gate.go:328] unrecognized feature gate: Example2
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436648 5119 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436652 5119 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436655 5119 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436659 5119 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436662 5119 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436665 5119 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436668 5119 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436672 5119 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436675 5119 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436678 5119 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436681 5119 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436684 5119 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436690 5119 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436695 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436700 5119 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436703 5119 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436707 5119 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436710 5119 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436713 5119 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436717 5119 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436721 5119 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436725 5119 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436728 5119 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436732 5119 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436735 5119 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436738 5119 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436741 5119 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436744 5119 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436747 5119 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436751 5119 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436755 5119 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436758 5119 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436762 5119 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436765 5119 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436768 5119 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436771 5119 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436774 5119 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436777 5119 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436781 5119 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436785 5119 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436788 5119 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436791 5119 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436794 5119 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436797 5119 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.436802 5119 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437276 5119 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437283 5119 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437287 5119 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437290 5119 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437294 5119 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437298 5119 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437301 5119 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437305 5119 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437308 5119 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437312 5119 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437315 5119 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437318 5119 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437321 5119 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437325 5119 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437328 5119 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437331 5119 feature_gate.go:328] unrecognized feature gate: Example
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437334 5119 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437340 5119 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437343 5119 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437346 5119 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437349 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437352 5119 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437357 5119 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437361 5119 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437364 5119 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437367 5119 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437370 5119 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437374 5119 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437377 5119 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437380 5119 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437384 5119 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437387 5119 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437390 5119 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437399 5119 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437403 5119 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437406 5119 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437409 5119 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437413 5119 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437416 5119 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437419 5119 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437422 5119 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437425 5119 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437428 5119 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437431 5119 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437434 5119 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437437 5119 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437442 5119 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437445 5119 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437448 5119 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437453 5119 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437456 5119 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437459 5119 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437462 5119 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437466 5119 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437469 5119 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437472 5119 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437475 5119 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437478 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437481 5119 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437485 5119 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437488 5119 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437491 5119 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437495 5119 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437498 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437501 5119 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437504 5119 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437513 5119 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437516 5119 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437519 5119 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437524 5119 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
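The same set of "unrecognized feature gate" warnings is emitted twice here (once per feature-gate pass, at ~.4364xx and ~.4372xx), which makes the raw journal hard to scan. A small helper like the following (a sketch, not part of any kubelet tooling) reduces the output to unique gate names with occurrence counts:

```python
import re
from collections import Counter

def unrecognized_gates(journal_text: str) -> Counter:
    """Count how often each 'unrecognized feature gate' name appears
    in raw journal output from the kubelet."""
    return Counter(re.findall(r"unrecognized feature gate: (\S+)", journal_text))
```

It can be fed the journal directly, e.g. `journalctl -u kubelet | python3 -c '...'` (the exact unit name depends on how the kubelet service is installed on the host).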
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437528 5119 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437531 5119 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437535 5119 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437538 5119 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437541 5119 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437544 5119 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437547 5119 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437550 5119 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437554 5119 feature_gate.go:328] unrecognized feature gate: Example2
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437557 5119 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437560 5119 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437564 5119 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437567 5119 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437572 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437575 5119 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.437578 5119 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.437857 5119 flags.go:64] FLAG: --address="0.0.0.0"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.437869 5119 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.437885 5119 flags.go:64] FLAG: --anonymous-auth="true"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.437891 5119 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.437897 5119 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.437902 5119 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.437909 5119 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.437916 5119 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.437921 5119 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.437925 5119 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.437931 5119 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.437935 5119 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.437940 5119 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.437951 5119 flags.go:64] FLAG: --cgroup-root=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.437955 5119 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.437958 5119 flags.go:64] FLAG: --client-ca-file=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.437962 5119 flags.go:64] FLAG: --cloud-config=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.437966 5119 flags.go:64] FLAG: --cloud-provider=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.437969 5119 flags.go:64] FLAG: --cluster-dns="[]"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.437977 5119 flags.go:64] FLAG: --cluster-domain=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.437981 5119 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.437984 5119 flags.go:64] FLAG: --config-dir=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.437988 5119 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.437992 5119 flags.go:64] FLAG: --container-log-max-files="5"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.437996 5119 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438000 5119 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438004 5119 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438011 5119 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438015 5119 flags.go:64] FLAG: --contention-profiling="false"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438019 5119 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438033 5119 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438037 5119 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438041 5119 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438047 5119 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438050 5119 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438054 5119 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438058 5119 flags.go:64] FLAG: --enable-load-reader="false"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438061 5119 flags.go:64] FLAG: --enable-server="true"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438067 5119 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438074 5119 flags.go:64] FLAG: --event-burst="100"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438078 5119 flags.go:64] FLAG: --event-qps="50"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438082 5119 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438085 5119 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438089 5119 flags.go:64] FLAG: --eviction-hard=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438093 5119 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438097 5119 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438106 5119 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438110 5119 flags.go:64] FLAG: --eviction-soft=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438114 5119 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438118 5119 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438121 5119 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438125 5119 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438128 5119 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438132 5119 flags.go:64] FLAG: --fail-swap-on="true"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438136 5119 flags.go:64] FLAG: --feature-gates=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438141 5119 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438144 5119 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438148 5119 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438152 5119 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438157 5119 flags.go:64] FLAG: --healthz-port="10248"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438160 5119 flags.go:64] FLAG: --help="false"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438164 5119 flags.go:64] FLAG: --hostname-override=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438167 5119 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438171 5119 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438174 5119 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438178 5119 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438182 5119 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438185 5119 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438189 5119 flags.go:64] FLAG: --image-service-endpoint=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438192 5119 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438198 5119 flags.go:64] FLAG: --kube-api-burst="100"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438202 5119 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438206 5119 flags.go:64] FLAG: --kube-api-qps="50"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438209 5119 flags.go:64] FLAG: --kube-reserved=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438213 5119 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438216 5119 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438220 5119 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438224 5119 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438227 5119 flags.go:64] FLAG: --lock-file=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438244 5119 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438248 5119 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438251 5119 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438257 5119 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438260 5119 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438264 5119 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438267 5119 flags.go:64] FLAG: --logging-format="text"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438271 5119 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438275 5119 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438280 5119 flags.go:64] FLAG: --manifest-url=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438284 5119 flags.go:64] FLAG: --manifest-url-header=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438298 5119 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438308 5119 flags.go:64] FLAG: --max-open-files="1000000"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438314 5119 flags.go:64] FLAG: --max-pods="110"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438318 5119 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438323 5119 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438327 5119 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438330 5119 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438334 5119 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438337 5119 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121
09:54:44.438341 5119 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438350 5119 flags.go:64] FLAG: --node-status-max-images="50" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438354 5119 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438361 5119 flags.go:64] FLAG: --oom-score-adj="-999" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438365 5119 flags.go:64] FLAG: --pod-cidr="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438370 5119 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438386 5119 flags.go:64] FLAG: --pod-manifest-path="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438391 5119 flags.go:64] FLAG: --pod-max-pids="-1" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438396 5119 flags.go:64] FLAG: --pods-per-core="0" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438418 5119 flags.go:64] FLAG: --port="10250" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438422 5119 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438427 5119 flags.go:64] FLAG: --provider-id="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438431 5119 flags.go:64] FLAG: --qos-reserved="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438451 5119 flags.go:64] FLAG: --read-only-port="10255" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438456 5119 flags.go:64] FLAG: --register-node="true" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438460 5119 flags.go:64] FLAG: --register-schedulable="true" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 
09:54:44.438465 5119 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438472 5119 flags.go:64] FLAG: --registry-burst="10" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438476 5119 flags.go:64] FLAG: --registry-qps="5" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438479 5119 flags.go:64] FLAG: --reserved-cpus="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438483 5119 flags.go:64] FLAG: --reserved-memory="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438487 5119 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438490 5119 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438494 5119 flags.go:64] FLAG: --rotate-certificates="false" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438498 5119 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438502 5119 flags.go:64] FLAG: --runonce="false" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438505 5119 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438509 5119 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438513 5119 flags.go:64] FLAG: --seccomp-default="false" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438516 5119 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438520 5119 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438523 5119 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438527 5119 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 21 09:54:44 crc 
kubenswrapper[5119]: I0121 09:54:44.438531 5119 flags.go:64] FLAG: --storage-driver-password="root" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438534 5119 flags.go:64] FLAG: --storage-driver-secure="false" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438540 5119 flags.go:64] FLAG: --storage-driver-table="stats" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438543 5119 flags.go:64] FLAG: --storage-driver-user="root" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438547 5119 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438550 5119 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438554 5119 flags.go:64] FLAG: --system-cgroups="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438557 5119 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438563 5119 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438566 5119 flags.go:64] FLAG: --tls-cert-file="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438570 5119 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438578 5119 flags.go:64] FLAG: --tls-min-version="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438581 5119 flags.go:64] FLAG: --tls-private-key-file="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438591 5119 flags.go:64] FLAG: --topology-manager-policy="none" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438594 5119 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438598 5119 flags.go:64] FLAG: --topology-manager-scope="container" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438602 5119 flags.go:64] FLAG: --v="2" Jan 21 09:54:44 crc 
kubenswrapper[5119]: I0121 09:54:44.438621 5119 flags.go:64] FLAG: --version="false" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438627 5119 flags.go:64] FLAG: --vmodule="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438631 5119 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.438635 5119 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438806 5119 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438813 5119 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438818 5119 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438823 5119 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438827 5119 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438831 5119 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438837 5119 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438841 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438846 5119 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438850 5119 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438854 5119 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438858 5119 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438862 5119 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438869 5119 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438873 5119 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438876 5119 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438880 5119 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438884 5119 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438888 5119 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438892 5119 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438896 5119 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 21 
09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438900 5119 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438903 5119 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438907 5119 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438911 5119 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438923 5119 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438928 5119 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438932 5119 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438936 5119 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438940 5119 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438944 5119 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438948 5119 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438952 5119 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438955 5119 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438959 5119 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438963 5119 
feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438967 5119 feature_gate.go:328] unrecognized feature gate: Example Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438971 5119 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438974 5119 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438978 5119 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438982 5119 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438986 5119 feature_gate.go:328] unrecognized feature gate: Example2 Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438990 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438994 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.438998 5119 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439002 5119 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439006 5119 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439010 5119 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439013 5119 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439017 5119 feature_gate.go:328] unrecognized feature gate: 
ConsolePluginContentSecurityPolicy Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439021 5119 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439025 5119 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439029 5119 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439033 5119 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439036 5119 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439040 5119 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439044 5119 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439047 5119 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439059 5119 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439063 5119 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439068 5119 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439074 5119 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439078 5119 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439082 5119 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439085 5119 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439089 5119 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439093 5119 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439097 5119 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439101 5119 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439105 5119 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439108 5119 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439112 5119 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439116 5119 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439120 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439124 5119 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439128 5119 
feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439132 5119 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439136 5119 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439140 5119 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439144 5119 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439148 5119 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439152 5119 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439156 5119 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439160 5119 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439164 5119 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.439168 5119 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.439183 5119 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false 
UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.448461 5119 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.448501 5119 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448633 5119 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448653 5119 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448662 5119 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448670 5119 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448678 5119 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448686 5119 feature_gate.go:328] unrecognized feature gate: Example2 Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448693 5119 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448702 5119 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448709 5119 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448716 5119 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448723 5119 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448730 5119 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 
21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448737 5119 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448745 5119 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448751 5119 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448759 5119 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448766 5119 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448773 5119 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448780 5119 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448787 5119 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448794 5119 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448801 5119 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448808 5119 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448816 5119 feature_gate.go:328] unrecognized feature gate: Example Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448823 5119 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448831 5119 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448838 5119 feature_gate.go:328] 
unrecognized feature gate: AlibabaPlatform Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448846 5119 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448854 5119 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448861 5119 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448868 5119 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448876 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448883 5119 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448890 5119 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448900 5119 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448907 5119 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448917 5119 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448925 5119 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448933 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448941 5119 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448948 5119 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448955 5119 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448962 5119 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448969 5119 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448976 5119 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448983 5119 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.448993 5119 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449003 5119 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449011 5119 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449019 5119 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449026 5119 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449035 5119 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449042 5119 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449050 5119 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449059 5119 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449066 5119 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449073 5119 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449080 5119 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449087 5119 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449095 5119 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449102 5119 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449109 5119 
feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449117 5119 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449124 5119 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449131 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449138 5119 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449145 5119 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449153 5119 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449160 5119 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449167 5119 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449175 5119 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449183 5119 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449190 5119 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449197 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449203 5119 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449211 5119 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 21 
09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449218 5119 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449225 5119 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449232 5119 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449239 5119 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449247 5119 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449254 5119 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449262 5119 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449269 5119 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449276 5119 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449283 5119 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.449296 5119 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true 
UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449487 5119 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449501 5119 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449510 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449519 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449527 5119 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449536 5119 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449545 5119 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449554 5119 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449563 5119 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449570 5119 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449578 5119 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449585 5119 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449593 5119 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449623 5119 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449632 5119 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449641 5119 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449648 5119 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449656 5119 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449662 5119 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449670 5119 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449677 5119 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449685 5119 
feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449692 5119 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449699 5119 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449706 5119 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449713 5119 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449720 5119 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449727 5119 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449734 5119 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449741 5119 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449748 5119 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449756 5119 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449763 5119 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449770 5119 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449778 5119 feature_gate.go:328] unrecognized feature gate: Example2 Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449785 5119 feature_gate.go:328] unrecognized 
feature gate: AzureWorkloadIdentity Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449791 5119 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449799 5119 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449805 5119 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449812 5119 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449820 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449827 5119 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449834 5119 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449841 5119 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449848 5119 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449855 5119 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449863 5119 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449870 5119 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449877 5119 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449884 5119 feature_gate.go:328] unrecognized feature gate: 
GatewayAPI Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449892 5119 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449899 5119 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449906 5119 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449913 5119 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449920 5119 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449928 5119 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449936 5119 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449943 5119 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449950 5119 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449957 5119 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449964 5119 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449972 5119 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449979 5119 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.449985 5119 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 
09:54:44.449992 5119 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.450000 5119 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.450007 5119 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.450014 5119 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.450021 5119 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.450029 5119 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.450036 5119 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.450043 5119 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.450050 5119 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.450057 5119 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.450064 5119 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.450070 5119 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.450078 5119 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.450085 5119 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.450092 5119 
feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.450099 5119 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.450106 5119 feature_gate.go:328] unrecognized feature gate: Example Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.450114 5119 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.450121 5119 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.450128 5119 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.450135 5119 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 09:54:44 crc kubenswrapper[5119]: W0121 09:54:44.450142 5119 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.450154 5119 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.450388 5119 server.go:962] "Client rotation is on, will bootstrap in background" Jan 21 09:54:44 crc kubenswrapper[5119]: E0121 09:54:44.453631 5119 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" 
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.456984 5119 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.457121 5119 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.458531 5119 server.go:1019] "Starting client certificate rotation" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.458645 5119 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.458717 5119 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.464116 5119 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 09:54:44 crc kubenswrapper[5119]: E0121 09:54:44.465638 5119 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.465711 5119 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.472470 5119 log.go:25] "Validated CRI v1 runtime API" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.491234 5119 log.go:25] "Validated CRI v1 image API" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.493002 5119 server.go:1452] "Using cgroup driver setting 
received from the CRI runtime" cgroupDriver="systemd" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.496193 5119 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2026-01-21-09-48-42-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.496220 5119 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:44 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.508547 5119 manager.go:217] Machine: {Timestamp:2026-01-21 09:54:44.507385632 +0000 UTC m=+0.175477330 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33649934336 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:53b028bb-3049-4b24-a969-520f108bc223 BootID:22713055-0a0e-45db-9c06-ee5e114186db Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824967168 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824967168 Type:vfs Inodes:4107658 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729990144 Type:vfs Inodes:819200 
HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:44 Capacity:1073741824 Type:vfs Inodes:4107658 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:8e:63:e5 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:8e:63:e5 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:e3:3d:fe Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:a9:09:34 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:62:40:82 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:37:b0:c2 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:26:79:a2:79:58:71 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:de:64:c7:08:14:96 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649934336 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 
Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 
Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.508988 5119 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.509130 5119 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.510238 5119 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.510302 5119 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.510571 5119 topology_manager.go:138] "Creating topology manager with none policy" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.510586 5119 container_manager_linux.go:306] "Creating device plugin manager" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.510642 5119 manager.go:141] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.510883 5119 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.511314 5119 state_mem.go:36] "Initialized new in-memory state store" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.511509 5119 server.go:1267] "Using root directory" path="/var/lib/kubelet" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.512236 5119 kubelet.go:491] "Attempting to sync node with API server" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.512277 5119 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.512309 5119 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.512325 5119 kubelet.go:397] "Adding apiserver pod source" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.512347 5119 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 21 09:54:44 crc kubenswrapper[5119]: E0121 09:54:44.516731 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 21 09:54:44 crc kubenswrapper[5119]: E0121 09:54:44.516753 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 21 09:54:44 crc 
kubenswrapper[5119]: I0121 09:54:44.517793 5119 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.517831 5119 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.519699 5119 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.519731 5119 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.521781 5119 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.521975 5119 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.522382 5119 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.522818 5119 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.522838 5119 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.522845 5119 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.522852 5119 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.522859 5119 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.522865 5119 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.522872 5119 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.522878 5119 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.522887 5119 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.522899 5119 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.522911 5119 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.523061 5119 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.523283 5119 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.523296 5119 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.524264 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.533163 5119 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.533213 5119 server.go:1295] "Started kubelet"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.533374 5119 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.533446 5119 server_v1.go:47] "podresources" method="list" useActivePods=true
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.533520 5119 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.533869 5119 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 21 09:54:44 crc systemd[1]: Started Kubernetes Kubelet.
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.535471 5119 server.go:317] "Adding debug handlers to kubelet server"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.536142 5119 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.536736 5119 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.536784 5119 volume_manager.go:295] "The desired_state_of_world populator starts"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.536802 5119 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.536862 5119 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 21 09:54:44 crc kubenswrapper[5119]: E0121 09:54:44.536998 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:54:44 crc kubenswrapper[5119]: E0121 09:54:44.536729 5119 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.234:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188cb661bc129866 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:44.533180518 +0000 UTC m=+0.201272196,LastTimestamp:2026-01-21 09:54:44.533180518 +0000 UTC m=+0.201272196,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:54:44 crc kubenswrapper[5119]: E0121 09:54:44.542227 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 21 09:54:44 crc kubenswrapper[5119]: E0121 09:54:44.542901 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="200ms"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.545166 5119 factory.go:153] Registering CRI-O factory
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.545437 5119 factory.go:223] Registration of the crio container factory successfully
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.545501 5119 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.545511 5119 factory.go:55] Registering systemd factory
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.545545 5119 factory.go:223] Registration of the systemd container factory successfully
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.545563 5119 factory.go:103] Registering Raw factory
Jan 21
09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.545577 5119 manager.go:1196] Started watching for new ooms in manager Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.547420 5119 manager.go:319] Starting recovery of all containers Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578340 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578406 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578419 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578430 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578440 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578448 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578457 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578465 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578474 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578484 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578491 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578498 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" 
volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578507 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578516 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578526 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578534 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578542 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578550 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" 
volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578557 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578564 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578571 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578579 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578587 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578595 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" 
seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578651 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578662 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578671 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578679 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578689 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578697 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578705 5119 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578722 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578731 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578740 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578748 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578756 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578764 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578771 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578779 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578786 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578798 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578808 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578828 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" 
volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578844 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578855 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578864 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578873 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578883 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578891 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" 
volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.578900 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580192 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580205 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580217 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580229 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580240 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" 
volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580251 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580269 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580280 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580291 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580304 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580315 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" 
seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580326 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580336 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580348 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580358 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580375 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580387 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580399 5119 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580416 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580429 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580440 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580452 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580464 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580474 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580485 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580496 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580507 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580519 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580530 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580542 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" 
volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580555 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580565 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580576 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580586 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580614 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580629 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" 
volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580640 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580651 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580662 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580673 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580684 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580695 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" 
volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580705 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580716 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580727 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580740 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580751 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580763 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" 
seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580774 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580785 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580795 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580809 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580819 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580830 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580841 
5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580852 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580864 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580875 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580886 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580896 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580906 5119 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580918 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580940 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580959 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580970 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580980 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.580990 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581000 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581053 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581064 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581074 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581084 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581096 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" 
volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581106 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581118 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581129 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581140 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581152 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581163 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" 
volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581175 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581186 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581197 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581208 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581221 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581233 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" 
volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581244 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581255 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581265 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581277 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581318 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581330 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" 
volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581341 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581352 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581363 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581375 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581386 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581396 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" 
volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581407 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581418 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581429 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581443 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581454 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581464 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" 
volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581474 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581501 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581515 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581526 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581536 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581548 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" 
volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581559 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581570 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581581 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581592 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581620 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581632 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" 
seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581647 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581657 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.581668 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.582595 5119 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.582655 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.582695 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" 
volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.582706 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.582716 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.582730 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.582740 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.582750 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.582761 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" 
seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.582772 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.582782 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583132 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583147 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583160 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583173 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583185 
5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583197 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583211 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583223 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583235 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583247 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583258 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583271 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583283 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583296 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583307 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583320 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583331 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583342 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583356 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583369 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583383 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583395 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583407 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583419 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583431 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583443 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583455 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583469 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583482 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583493 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583505 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583518 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583530 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583541 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583553 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583565 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583577 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583588 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583600 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583642 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583657 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583669 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583680 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583691 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583702 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583713 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583755 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583790 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583804 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583815 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583827 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583838 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583849 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583859 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583871 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583890 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583901 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583913 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583923 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583933 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583948 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583959 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583969 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583980 5119 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext=""
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583991 5119 reconstruct.go:97] "Volume reconstruction finished"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.583998 5119 reconciler.go:26] "Reconciler: start to sync state"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.587500 5119 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.589230 5119 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.589293 5119 status_manager.go:230] "Starting to sync pod status with apiserver"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.589561 5119 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.589633 5119 kubelet.go:2451] "Starting kubelet main sync loop"
Jan 21 09:54:44 crc kubenswrapper[5119]: E0121 09:54:44.589720 5119 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 21 09:54:44 crc kubenswrapper[5119]: E0121 09:54:44.590965 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.593859 5119 manager.go:324] Recovery completed
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.605093 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.606389 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.606427 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.607425 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.608940 5119 cpu_manager.go:222] "Starting CPU manager" policy="none"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.608980 5119 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.609009 5119 state_mem.go:36] "Initialized new in-memory state store"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.613086 5119 policy_none.go:49] "None policy: Start"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.613114 5119 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.613133 5119 state_mem.go:35] "Initializing new in-memory state store"
Jan 21 09:54:44 crc kubenswrapper[5119]: E0121 09:54:44.637669 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.651948 5119 manager.go:341] "Starting Device Plugin manager"
Jan 21 09:54:44 crc kubenswrapper[5119]: E0121 09:54:44.652033 5119 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.652053 5119 server.go:85] "Starting device plugin registration server"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.652629 5119 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.652656 5119 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.652893 5119 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.653045 5119 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.653066 5119 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 21 09:54:44 crc kubenswrapper[5119]: E0121 09:54:44.656081 5119 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\""
Jan 21 09:54:44 crc kubenswrapper[5119]: E0121 09:54:44.656168 5119 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.690254 5119 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.690522 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.691237 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.691280 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.691293 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.692005 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.692361 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.692465 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.692507 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.692538 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.692551 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.693356 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.693380 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.693389 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.693408 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.693543 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.693576 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.693945 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.693987 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.694001 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.694538 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.694571 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.694587 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.694862 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.694974 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.695015 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.695472 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.695497 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.695510 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.695497 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.695595 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.695616 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.696109 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.696229 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.696270 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.696747 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.696766 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.696775 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.696798 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.696822 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.696835 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.697296 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.697320 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.697987 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.698022 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.698036 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:44 crc kubenswrapper[5119]: E0121 09:54:44.721386 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:54:44 crc kubenswrapper[5119]: E0121 09:54:44.735808 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:54:44 crc kubenswrapper[5119]: E0121 09:54:44.743008 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:54:44 crc kubenswrapper[5119]: E0121 09:54:44.743946 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="400ms"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.753022 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.754053 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.754110 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.754128 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.754177 5119 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 21 09:54:44 crc kubenswrapper[5119]: E0121 09:54:44.754764 5119 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.234:6443: connect: connection refused" node="crc"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.786482 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.786859 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.786884 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.787186 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.787253 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.787272 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.787381 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.787414 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.787437 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.787455 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.787625 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.787704 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.787779 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.787811 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.787852 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.787871 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.787886 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.787901 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.787915 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod
\"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.788078 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.788102 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.787932 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.788157 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.788182 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.788201 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.788240 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.788259 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.788278 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.788349 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 
09:54:44 crc kubenswrapper[5119]: E0121 09:54:44.788785 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.789023 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: E0121 09:54:44.793886 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.889336 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.889405 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.889428 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.889460 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.889480 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.889503 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.889521 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.889546 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.889563 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.889569 5119 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.889676 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.889700 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.889726 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.889730 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.889760 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " 
pod="openshift-etcd/etcd-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.889757 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.889788 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.889799 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.889825 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.889834 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.889649 5119 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.889849 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.889876 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.889514 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.889894 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.889915 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.889944 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.889973 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.890000 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.890029 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.890060 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.890090 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.955110 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.956132 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.956180 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.956193 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:54:44 crc kubenswrapper[5119]: I0121 09:54:44.956225 5119 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 09:54:44 crc kubenswrapper[5119]: E0121 09:54:44.956685 5119 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.234:6443: connect: connection refused" node="crc" Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.022043 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.036932 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.044193 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:54:45 crc kubenswrapper[5119]: W0121 09:54:45.046709 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-87afb3511bb8aa8067e6600879e7a10c2050058d57354ee2c0a7724ad8b78b81 WatchSource:0}: Error finding container 87afb3511bb8aa8067e6600879e7a10c2050058d57354ee2c0a7724ad8b78b81: Status 404 returned error can't find the container with id 87afb3511bb8aa8067e6600879e7a10c2050058d57354ee2c0a7724ad8b78b81 Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.056861 5119 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 09:54:45 crc kubenswrapper[5119]: W0121 09:54:45.057396 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-930d9d425a870205004eeff655ad6a56e47b59926ee092b8073fe7bbbdcf674e WatchSource:0}: Error finding container 930d9d425a870205004eeff655ad6a56e47b59926ee092b8073fe7bbbdcf674e: Status 404 returned error can't find the container with id 930d9d425a870205004eeff655ad6a56e47b59926ee092b8073fe7bbbdcf674e Jan 21 09:54:45 crc kubenswrapper[5119]: W0121 09:54:45.072055 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-45ee6229ac063f9b16921094ea4fdace2a6f019642b83e58ffe364a6aa4c21d8 WatchSource:0}: Error finding container 45ee6229ac063f9b16921094ea4fdace2a6f019642b83e58ffe364a6aa4c21d8: Status 404 returned error can't find the container with id 45ee6229ac063f9b16921094ea4fdace2a6f019642b83e58ffe364a6aa4c21d8 Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.089339 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.094389 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 09:54:45 crc kubenswrapper[5119]: W0121 09:54:45.114137 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-594719ef43b037a1e05f08edc6445230a7393deaa003596f36cee7e498bfb1c1 WatchSource:0}: Error finding container 594719ef43b037a1e05f08edc6445230a7393deaa003596f36cee7e498bfb1c1: Status 404 returned error can't find the container with id 594719ef43b037a1e05f08edc6445230a7393deaa003596f36cee7e498bfb1c1 Jan 21 09:54:45 crc kubenswrapper[5119]: W0121 09:54:45.116726 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-cd899883716389538bc5d900d332ff8a852b3834adb24ae2d528d1d4ce00f228 WatchSource:0}: Error finding container cd899883716389538bc5d900d332ff8a852b3834adb24ae2d528d1d4ce00f228: Status 404 returned error can't find the container with id cd899883716389538bc5d900d332ff8a852b3834adb24ae2d528d1d4ce00f228 Jan 21 09:54:45 crc kubenswrapper[5119]: E0121 09:54:45.145338 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="800ms" Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.357094 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.360420 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.360477 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.360497 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.360525 5119 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 09:54:45 crc kubenswrapper[5119]: E0121 09:54:45.361140 5119 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.234:6443: connect: connection refused" node="crc" Jan 21 09:54:45 crc kubenswrapper[5119]: E0121 09:54:45.382749 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.525099 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.594011 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"6a96bc82968554f4963af69b0650f31ed23187fc0611ce4a942afca605f17b35"} Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.594069 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"45ee6229ac063f9b16921094ea4fdace2a6f019642b83e58ffe364a6aa4c21d8"} Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.595084 5119 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="6bc754856ba8182d75875ffd6507b081cb1e493e98c630dc820f040589b4412a" exitCode=0 Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.595144 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"6bc754856ba8182d75875ffd6507b081cb1e493e98c630dc820f040589b4412a"} Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.595160 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"930d9d425a870205004eeff655ad6a56e47b59926ee092b8073fe7bbbdcf674e"} Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.595279 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.595849 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.595872 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.595880 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:54:45 crc kubenswrapper[5119]: E0121 09:54:45.596023 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:54:45 
crc kubenswrapper[5119]: I0121 09:54:45.597227 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.597510 5119 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="ede3e82ab49565f7370837e3f7d418accce5fa866740ed81a80c838eb4246732" exitCode=0 Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.597562 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"ede3e82ab49565f7370837e3f7d418accce5fa866740ed81a80c838eb4246732"} Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.597579 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"87afb3511bb8aa8067e6600879e7a10c2050058d57354ee2c0a7724ad8b78b81"} Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.597701 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.598134 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.598154 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.598164 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.598172 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.598194 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure"
Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.598207 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:45 crc kubenswrapper[5119]: E0121 09:54:45.598321 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.600793 5119 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="b3d1aa5783e1b0d52d0d3713d8185e31dde20546e122dbde9b9b5260b3ae2f0c" exitCode=0
Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.600936 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"b3d1aa5783e1b0d52d0d3713d8185e31dde20546e122dbde9b9b5260b3ae2f0c"}
Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.600976 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"cd899883716389538bc5d900d332ff8a852b3834adb24ae2d528d1d4ce00f228"}
Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.601102 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.602016 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.602050 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.602063 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:45 crc kubenswrapper[5119]: E0121 09:54:45.602268 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.603380 5119 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="0d256e3818040b35be7b6695f5d3db72105283175c97210ab4812e3bc8b0c97f" exitCode=0
Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.603418 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"0d256e3818040b35be7b6695f5d3db72105283175c97210ab4812e3bc8b0c97f"}
Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.603441 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"594719ef43b037a1e05f08edc6445230a7393deaa003596f36cee7e498bfb1c1"}
Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.603538 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.604822 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.604868 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:45 crc kubenswrapper[5119]: I0121 09:54:45.604879 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:45 crc kubenswrapper[5119]: E0121 09:54:45.605444 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:54:45 crc kubenswrapper[5119]: E0121 09:54:45.799334 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 21 09:54:45 crc kubenswrapper[5119]: E0121 09:54:45.799411 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 21 09:54:45 crc kubenswrapper[5119]: E0121 09:54:45.946946 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="1.6s"
Jan 21 09:54:46 crc kubenswrapper[5119]: E0121 09:54:46.027072 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.162129 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.163086 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.163129 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.163140 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.163162 5119 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 21 09:54:46 crc kubenswrapper[5119]: E0121 09:54:46.164322 5119 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.234:6443: connect: connection refused" node="crc"
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.607649 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"fa9e30fb91da794586d30c109e92837daa89933bdf20a8c7cb5319ebec439888"}
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.607804 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.608305 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.608334 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.608357 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:46 crc kubenswrapper[5119]: E0121 09:54:46.608522 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.611560 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"5cd6be5cbdd0fcbb91b735fb086052ec8f7c116137e8ffc5a7810ad3c41fd0b4"}
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.611617 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"470140ab3976c458184542c1dabf115e0e58166e29b3dea093441b4b8376e691"}
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.611633 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"02a66e8fce864e03922933d50d2f46c9723439d27f80c37cec85769c68188108"}
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.611761 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.613965 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.613999 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.614011 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:46 crc kubenswrapper[5119]: E0121 09:54:46.614185 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.618531 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"2cfee1d049b0b79a9237d6a7a8acc7d42f68c7fb71961b253dd251a788625d1a"}
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.618583 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"8d4e8316a500188774f66047afdf6dae0216cd90571de582233f01e0280f3f46"}
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.618621 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"89bccdab80b7697a627e126f2d517e2574786d9422c5c931bd52d685687cd90e"}
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.618795 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.619648 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.619681 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.619693 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:46 crc kubenswrapper[5119]: E0121 09:54:46.619893 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.622807 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"727f79868ee86317b4673832731b31a9bfda9544199d9a8a15703907dbcc5955"}
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.622833 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"80a2b2ffd835f4cb06510091082012d441852e2432e9b9235a1523ae4ccc2736"}
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.622842 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"abe88e871fc9fda8e6ae7b32df7e0c1c09d159aad9289a8cbdd48ed4d8b3a85b"}
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.624390 5119 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="199845e27bc250344d296cb991fce2f8c77a2e2761012343e19c574acc1daa8d" exitCode=0
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.624416 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"199845e27bc250344d296cb991fce2f8c77a2e2761012343e19c574acc1daa8d"}
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.624526 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.624952 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.624975 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.624984 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:46 crc kubenswrapper[5119]: E0121 09:54:46.625112 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.652676 5119 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Jan 21 09:54:46 crc kubenswrapper[5119]: I0121 09:54:46.951424 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 09:54:47 crc kubenswrapper[5119]: I0121 09:54:47.628924 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"e618c4b2bbe23b1682630f31e631d6b91a402f4e321be95db2902a717d1a143e"}
Jan 21 09:54:47 crc kubenswrapper[5119]: I0121 09:54:47.628980 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"f92a4390d94c573772e9bff65e3727a393648086e6c71d37714995ecbf1659a2"}
Jan 21 09:54:47 crc kubenswrapper[5119]: I0121 09:54:47.629152 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:47 crc kubenswrapper[5119]: I0121 09:54:47.629692 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:47 crc kubenswrapper[5119]: I0121 09:54:47.629720 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:47 crc kubenswrapper[5119]: I0121 09:54:47.629732 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:47 crc kubenswrapper[5119]: E0121 09:54:47.629955 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:54:47 crc kubenswrapper[5119]: I0121 09:54:47.631992 5119 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="2ddea5f3c72381bc4a6dbbbf368733bfd085a6d72265fe838150007a05987932" exitCode=0
Jan 21 09:54:47 crc kubenswrapper[5119]: I0121 09:54:47.632161 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:47 crc kubenswrapper[5119]: I0121 09:54:47.632345 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"2ddea5f3c72381bc4a6dbbbf368733bfd085a6d72265fe838150007a05987932"}
Jan 21 09:54:47 crc kubenswrapper[5119]: I0121 09:54:47.632475 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:47 crc kubenswrapper[5119]: I0121 09:54:47.632999 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:47 crc kubenswrapper[5119]: I0121 09:54:47.633063 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:47 crc kubenswrapper[5119]: I0121 09:54:47.633098 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:47 crc kubenswrapper[5119]: I0121 09:54:47.633109 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:47 crc kubenswrapper[5119]: I0121 09:54:47.633068 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:47 crc kubenswrapper[5119]: I0121 09:54:47.633165 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:47 crc kubenswrapper[5119]: E0121 09:54:47.633342 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:54:47 crc kubenswrapper[5119]: E0121 09:54:47.633505 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:54:47 crc kubenswrapper[5119]: I0121 09:54:47.765253 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:47 crc kubenswrapper[5119]: I0121 09:54:47.766157 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:47 crc kubenswrapper[5119]: I0121 09:54:47.766203 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:47 crc kubenswrapper[5119]: I0121 09:54:47.766214 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:47 crc kubenswrapper[5119]: I0121 09:54:47.766239 5119 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 21 09:54:47 crc kubenswrapper[5119]: I0121 09:54:47.775773 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 09:54:48 crc kubenswrapper[5119]: I0121 09:54:48.641989 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"390b2ff8810cc1420f748190d4eaa2e29232197cc53ac8be36ee13312ac3dda2"}
Jan 21 09:54:48 crc kubenswrapper[5119]: I0121 09:54:48.642385 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"1e8064cabfb679b3604b92ee2d391248a40f938a1343e1d97413783f02c35748"}
Jan 21 09:54:48 crc kubenswrapper[5119]: I0121 09:54:48.642119 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:48 crc kubenswrapper[5119]: I0121 09:54:48.642411 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 09:54:48 crc kubenswrapper[5119]: I0121 09:54:48.642428 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"dbe65977c6e0db625c023229341e3b84c9173e51006bf22d745ad47a1432d421"}
Jan 21 09:54:48 crc kubenswrapper[5119]: I0121 09:54:48.642443 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"14b47c160ec5dffdee4357bbd382f1496b235b085c4fe443b01e73e9f877570c"}
Jan 21 09:54:48 crc kubenswrapper[5119]: I0121 09:54:48.642456 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"2e9cad5d6f6cab73e81cdd223b3f6781b26a930f25412d08728fa06e8858d06b"}
Jan 21 09:54:48 crc kubenswrapper[5119]: I0121 09:54:48.642119 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:48 crc kubenswrapper[5119]: I0121 09:54:48.642564 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:48 crc kubenswrapper[5119]: I0121 09:54:48.643521 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:48 crc kubenswrapper[5119]: I0121 09:54:48.643549 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:48 crc kubenswrapper[5119]: I0121 09:54:48.643566 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:48 crc kubenswrapper[5119]: I0121 09:54:48.643521 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:48 crc kubenswrapper[5119]: I0121 09:54:48.643619 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:48 crc kubenswrapper[5119]: I0121 09:54:48.643579 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:48 crc kubenswrapper[5119]: I0121 09:54:48.643639 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:48 crc kubenswrapper[5119]: I0121 09:54:48.643641 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:48 crc kubenswrapper[5119]: I0121 09:54:48.643580 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:48 crc kubenswrapper[5119]: E0121 09:54:48.644086 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:54:48 crc kubenswrapper[5119]: E0121 09:54:48.644482 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:54:48 crc kubenswrapper[5119]: E0121 09:54:48.644657 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:54:48 crc kubenswrapper[5119]: I0121 09:54:48.766689 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 09:54:48 crc kubenswrapper[5119]: I0121 09:54:48.773764 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 09:54:49 crc kubenswrapper[5119]: I0121 09:54:49.643554 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:49 crc kubenswrapper[5119]: I0121 09:54:49.643639 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:49 crc kubenswrapper[5119]: I0121 09:54:49.643746 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:49 crc kubenswrapper[5119]: I0121 09:54:49.644332 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:49 crc kubenswrapper[5119]: I0121 09:54:49.644375 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:49 crc kubenswrapper[5119]: I0121 09:54:49.644389 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:49 crc kubenswrapper[5119]: I0121 09:54:49.644748 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:49 crc kubenswrapper[5119]: I0121 09:54:49.644777 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:49 crc kubenswrapper[5119]: I0121 09:54:49.644790 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:49 crc kubenswrapper[5119]: E0121 09:54:49.644846 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:54:49 crc kubenswrapper[5119]: I0121 09:54:49.645154 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:49 crc kubenswrapper[5119]: I0121 09:54:49.645199 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:49 crc kubenswrapper[5119]: I0121 09:54:49.645209 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:49 crc kubenswrapper[5119]: E0121 09:54:49.647993 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:54:49 crc kubenswrapper[5119]: E0121 09:54:49.648959 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:54:49 crc kubenswrapper[5119]: I0121 09:54:49.951787 5119 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body=
Jan 21 09:54:49 crc kubenswrapper[5119]: I0121 09:54:49.952083 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded"
Jan 21 09:54:49 crc kubenswrapper[5119]: I0121 09:54:49.968484 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 09:54:50 crc kubenswrapper[5119]: I0121 09:54:50.645876 5119 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 09:54:50 crc kubenswrapper[5119]: I0121 09:54:50.645928 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:50 crc kubenswrapper[5119]: I0121 09:54:50.648235 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:50 crc kubenswrapper[5119]: I0121 09:54:50.648300 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:50 crc kubenswrapper[5119]: I0121 09:54:50.648313 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:50 crc kubenswrapper[5119]: E0121 09:54:50.648735 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:54:50 crc kubenswrapper[5119]: I0121 09:54:50.924586 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc"
Jan 21 09:54:50 crc kubenswrapper[5119]: I0121 09:54:50.924880 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:50 crc kubenswrapper[5119]: I0121 09:54:50.925744 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:50 crc kubenswrapper[5119]: I0121 09:54:50.925795 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:50 crc kubenswrapper[5119]: I0121 09:54:50.925812 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:50 crc kubenswrapper[5119]: E0121 09:54:50.926258 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:54:50 crc kubenswrapper[5119]: I0121 09:54:50.938923 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 09:54:50 crc kubenswrapper[5119]: I0121 09:54:50.939156 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:50 crc kubenswrapper[5119]: I0121 09:54:50.940012 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:50 crc kubenswrapper[5119]: I0121 09:54:50.940050 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:50 crc kubenswrapper[5119]: I0121 09:54:50.940060 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:50 crc kubenswrapper[5119]: E0121 09:54:50.940399 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:54:51 crc kubenswrapper[5119]: I0121 09:54:51.283232 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 21 09:54:51 crc kubenswrapper[5119]: I0121 09:54:51.283595 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:51 crc kubenswrapper[5119]: I0121 09:54:51.284713 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:51 crc kubenswrapper[5119]: I0121 09:54:51.284768 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:51 crc kubenswrapper[5119]: I0121 09:54:51.284778 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:51 crc kubenswrapper[5119]: E0121 09:54:51.285106 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:54:51 crc kubenswrapper[5119]: I0121 09:54:51.647813 5119 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 09:54:51 crc kubenswrapper[5119]: I0121 09:54:51.647891 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:51 crc kubenswrapper[5119]: I0121 09:54:51.648542 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:51 crc kubenswrapper[5119]: I0121 09:54:51.648587 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:51 crc kubenswrapper[5119]: I0121 09:54:51.648682 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:51 crc kubenswrapper[5119]: E0121 09:54:51.649288 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:54:52 crc kubenswrapper[5119]: I0121 09:54:52.592903 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 09:54:52 crc kubenswrapper[5119]: I0121 09:54:52.650593 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:52 crc kubenswrapper[5119]: I0121 09:54:52.651394 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:52 crc kubenswrapper[5119]: I0121 09:54:52.651491 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:52 crc kubenswrapper[5119]: I0121 09:54:52.651511 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:52 crc kubenswrapper[5119]: E0121 09:54:52.652058 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:54:54 crc kubenswrapper[5119]: E0121 09:54:54.656462 5119 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 21 09:54:56 crc kubenswrapper[5119]: I0121 09:54:56.526001 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 21 09:54:56 crc kubenswrapper[5119]: E0121 09:54:56.654958 5119 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 21 09:54:56 crc kubenswrapper[5119]: I0121 09:54:56.796483 5119 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 21 09:54:56 crc kubenswrapper[5119]: I0121 09:54:56.796541 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 21 09:54:56 crc kubenswrapper[5119]: I0121 09:54:56.830563 5119 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 21 09:54:56 crc kubenswrapper[5119]: I0121 09:54:56.830641 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 21 09:54:57 crc kubenswrapper[5119]: E0121 09:54:57.547534 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s"
Jan 21 09:54:58 crc kubenswrapper[5119]: I0121 09:54:58.519564 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 21 09:54:58 crc kubenswrapper[5119]: I0121 09:54:58.519862 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:54:58 crc kubenswrapper[5119]: I0121 09:54:58.520806 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:54:58 crc kubenswrapper[5119]: I0121 09:54:58.520888 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:54:58 crc kubenswrapper[5119]: I0121 09:54:58.520915 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:54:58 crc kubenswrapper[5119]: E0121 09:54:58.521564 5119 kubelet.go:3336] "No need
to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:54:58 crc kubenswrapper[5119]: I0121 09:54:58.546456 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 21 09:54:58 crc kubenswrapper[5119]: I0121 09:54:58.665976 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:54:58 crc kubenswrapper[5119]: I0121 09:54:58.666769 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:54:58 crc kubenswrapper[5119]: I0121 09:54:58.666832 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:54:58 crc kubenswrapper[5119]: I0121 09:54:58.666853 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:54:58 crc kubenswrapper[5119]: E0121 09:54:58.667559 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:54:58 crc kubenswrapper[5119]: I0121 09:54:58.679164 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 21 09:54:59 crc kubenswrapper[5119]: I0121 09:54:59.668098 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:54:59 crc kubenswrapper[5119]: I0121 09:54:59.669097 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:54:59 crc kubenswrapper[5119]: I0121 09:54:59.669166 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:54:59 crc kubenswrapper[5119]: I0121 09:54:59.669192 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 09:54:59 crc kubenswrapper[5119]: E0121 09:54:59.669932 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:54:59 crc kubenswrapper[5119]: I0121 09:54:59.952464 5119 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 09:54:59 crc kubenswrapper[5119]: I0121 09:54:59.952552 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 09:55:00 crc kubenswrapper[5119]: E0121 09:55:00.760771 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="6.4s" Jan 21 09:55:00 crc kubenswrapper[5119]: I0121 09:55:00.940240 5119 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 21 09:55:00 crc kubenswrapper[5119]: I0121 09:55:00.940687 5119 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" 
containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 21 09:55:00 crc kubenswrapper[5119]: I0121 09:55:00.946900 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:55:00 crc kubenswrapper[5119]: I0121 09:55:00.947267 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:55:00 crc kubenswrapper[5119]: I0121 09:55:00.947784 5119 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 21 09:55:00 crc kubenswrapper[5119]: I0121 09:55:00.947860 5119 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 21 09:55:00 crc kubenswrapper[5119]: I0121 09:55:00.948709 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:55:00 crc kubenswrapper[5119]: I0121 09:55:00.948737 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:55:00 crc kubenswrapper[5119]: I0121 09:55:00.948746 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:55:00 crc kubenswrapper[5119]: E0121 09:55:00.949016 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not 
found" node="crc" Jan 21 09:55:00 crc kubenswrapper[5119]: I0121 09:55:00.953243 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:55:00 crc kubenswrapper[5119]: I0121 09:55:00.972463 5119 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 21 09:55:00 crc kubenswrapper[5119]: I0121 09:55:00.987589 5119 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 21 09:55:01 crc kubenswrapper[5119]: I0121 09:55:01.375440 5119 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 21 09:55:01 crc kubenswrapper[5119]: I0121 09:55:01.375506 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 21 09:55:01 crc kubenswrapper[5119]: I0121 09:55:01.673312 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:55:01 crc kubenswrapper[5119]: I0121 09:55:01.673756 5119 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 21 09:55:01 crc kubenswrapper[5119]: I0121 09:55:01.673858 5119 prober.go:120] 
"Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 21 09:55:01 crc kubenswrapper[5119]: I0121 09:55:01.673895 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:55:01 crc kubenswrapper[5119]: I0121 09:55:01.673921 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:55:01 crc kubenswrapper[5119]: I0121 09:55:01.673933 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.674186 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:55:01 crc kubenswrapper[5119]: I0121 09:55:01.824121 5119 trace.go:236] Trace[1302468863]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 09:54:47.610) (total time: 14213ms): Jan 21 09:55:01 crc kubenswrapper[5119]: Trace[1302468863]: ---"Objects listed" error:nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope 14213ms (09:55:01.824) Jan 21 09:55:01 crc kubenswrapper[5119]: Trace[1302468863]: [14.213423113s] [14.213423113s] END Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.824178 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 21 09:55:01 crc kubenswrapper[5119]: I0121 09:55:01.824124 5119 
trace.go:236] Trace[536481911]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 09:54:47.698) (total time: 14125ms): Jan 21 09:55:01 crc kubenswrapper[5119]: Trace[536481911]: ---"Objects listed" error:csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope 14125ms (09:55:01.824) Jan 21 09:55:01 crc kubenswrapper[5119]: Trace[536481911]: [14.125252013s] [14.125252013s] END Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.824206 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 21 09:55:01 crc kubenswrapper[5119]: I0121 09:55:01.824247 5119 trace.go:236] Trace[1048660901]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 09:54:48.011) (total time: 13813ms): Jan 21 09:55:01 crc kubenswrapper[5119]: Trace[1048660901]: ---"Objects listed" error:runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope 13813ms (09:55:01.824) Jan 21 09:55:01 crc kubenswrapper[5119]: Trace[1048660901]: [13.813144518s] [13.813144518s] END Jan 21 09:55:01 crc kubenswrapper[5119]: I0121 09:55:01.824296 5119 trace.go:236] Trace[1161095569]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 09:54:49.005) (total time: 12818ms): Jan 21 09:55:01 crc kubenswrapper[5119]: Trace[1161095569]: ---"Objects listed" error:services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope 12818ms (09:55:01.824) Jan 21 09:55:01 crc kubenswrapper[5119]: Trace[1161095569]: 
[12.818769905s] [12.818769905s] END Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.824315 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.824299 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.824353 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb661bc129866 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:44.533180518 +0000 UTC m=+0.201272196,LastTimestamp:2026-01-21 09:54:44.533180518 +0000 UTC m=+0.201272196,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.827656 5119 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 
09:55:01.827689 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb661c06ff9f8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:44.606409208 +0000 UTC m=+0.274500876,LastTimestamp:2026-01-21 09:54:44.606409208 +0000 UTC m=+0.274500876,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.832449 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb661c07054df default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:44.606432479 +0000 UTC m=+0.274524157,LastTimestamp:2026-01-21 09:54:44.606432479 +0000 UTC m=+0.274524157,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.834630 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{crc.188cb661c080526d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:44.607480429 +0000 UTC m=+0.275572107,LastTimestamp:2026-01-21 09:54:44.607480429 +0000 UTC m=+0.275572107,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.837307 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb661c4317d12 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:44.669422866 +0000 UTC m=+0.337514554,LastTimestamp:2026-01-21 09:54:44.669422866 +0000 UTC m=+0.337514554,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.840600 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb661c06ff9f8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb661c06ff9f8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:44.606409208 +0000 UTC m=+0.274500876,LastTimestamp:2026-01-21 09:54:44.691262855 +0000 UTC m=+0.359354533,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.844761 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb661c07054df\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb661c07054df default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:44.606432479 +0000 UTC m=+0.274524157,LastTimestamp:2026-01-21 09:54:44.691287565 +0000 UTC m=+0.359379243,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.849647 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb661c080526d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb661c080526d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc 
status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:44.607480429 +0000 UTC m=+0.275572107,LastTimestamp:2026-01-21 09:54:44.691297655 +0000 UTC m=+0.359389333,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.860959 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb661c06ff9f8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb661c06ff9f8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:44.606409208 +0000 UTC m=+0.274500876,LastTimestamp:2026-01-21 09:54:44.692526204 +0000 UTC m=+0.360617892,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.866967 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb661c07054df\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb661c07054df default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:44.606432479 +0000 UTC 
m=+0.274524157,LastTimestamp:2026-01-21 09:54:44.692545234 +0000 UTC m=+0.360636922,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.874719 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb661c080526d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb661c080526d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:44.607480429 +0000 UTC m=+0.275572107,LastTimestamp:2026-01-21 09:54:44.692557574 +0000 UTC m=+0.360649262,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.881008 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb661c06ff9f8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb661c06ff9f8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:44.606409208 +0000 UTC m=+0.274500876,LastTimestamp:2026-01-21 09:54:44.693373786 +0000 UTC m=+0.361465464,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.890022 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb661c07054df\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb661c07054df default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:44.606432479 +0000 UTC m=+0.274524157,LastTimestamp:2026-01-21 09:54:44.693385357 +0000 UTC m=+0.361477035,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.895598 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb661c080526d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb661c080526d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:44.607480429 +0000 UTC m=+0.275572107,LastTimestamp:2026-01-21 09:54:44.693393957 +0000 UTC m=+0.361485625,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.906246 5119 event.go:359] 
"Server rejected event (will not retry!)" err="events \"crc.188cb661c06ff9f8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb661c06ff9f8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:44.606409208 +0000 UTC m=+0.274500876,LastTimestamp:2026-01-21 09:54:44.693968416 +0000 UTC m=+0.362060104,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.911622 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb661c07054df\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb661c07054df default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:44.606432479 +0000 UTC m=+0.274524157,LastTimestamp:2026-01-21 09:54:44.693995837 +0000 UTC m=+0.362087535,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.916867 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb661c080526d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API 
group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb661c080526d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:44.607480429 +0000 UTC m=+0.275572107,LastTimestamp:2026-01-21 09:54:44.694007917 +0000 UTC m=+0.362099615,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.922485 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb661c06ff9f8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb661c06ff9f8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:44.606409208 +0000 UTC m=+0.274500876,LastTimestamp:2026-01-21 09:54:44.694562295 +0000 UTC m=+0.362653983,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.929350 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb661c07054df\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb661c07054df default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:44.606432479 +0000 UTC m=+0.274524157,LastTimestamp:2026-01-21 09:54:44.694579676 +0000 UTC m=+0.362671374,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.935665 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb661c080526d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb661c080526d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:44.607480429 +0000 UTC m=+0.275572107,LastTimestamp:2026-01-21 09:54:44.694595156 +0000 UTC m=+0.362686844,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.941171 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb661c06ff9f8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb661c06ff9f8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status 
is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:44.606409208 +0000 UTC m=+0.274500876,LastTimestamp:2026-01-21 09:54:44.695485779 +0000 UTC m=+0.363577457,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.946898 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb661c07054df\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb661c07054df default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:44.606432479 +0000 UTC m=+0.274524157,LastTimestamp:2026-01-21 09:54:44.69550539 +0000 UTC m=+0.363597068,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.951045 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb661c080526d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb661c080526d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:44.607480429 +0000 UTC 
m=+0.275572107,LastTimestamp:2026-01-21 09:54:44.69551584 +0000 UTC m=+0.363607518,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.954963 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb661c06ff9f8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb661c06ff9f8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:44.606409208 +0000 UTC m=+0.274500876,LastTimestamp:2026-01-21 09:54:44.695585031 +0000 UTC m=+0.363676709,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.959559 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb661c07054df\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb661c07054df default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:44.606432479 +0000 UTC m=+0.274524157,LastTimestamp:2026-01-21 09:54:44.695611501 +0000 UTC m=+0.363703179,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.964090 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb661db5187ca openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.05739873 +0000 UTC m=+0.725490418,LastTimestamp:2026-01-21 09:54:45.05739873 +0000 UTC m=+0.725490418,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.968421 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb661db6aa828 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.059045416 +0000 UTC m=+0.727137114,LastTimestamp:2026-01-21 09:54:45.059045416 +0000 UTC m=+0.727137114,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.972935 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cb661dc55a830 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.074446384 +0000 UTC m=+0.742538082,LastTimestamp:2026-01-21 09:54:45.074446384 +0000 UTC m=+0.742538082,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.976695 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cb661df180410 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.12073832 +0000 UTC m=+0.788829998,LastTimestamp:2026-01-21 09:54:45.12073832 +0000 UTC m=+0.788829998,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.980858 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188cb661df356d2b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.122665771 +0000 UTC m=+0.790757459,LastTimestamp:2026-01-21 09:54:45.122665771 +0000 UTC m=+0.790757459,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.985483 5119 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cb661f7215049 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.524000841 +0000 UTC m=+1.192092519,LastTimestamp:2026-01-21 09:54:45.524000841 +0000 UTC m=+1.192092519,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc kubenswrapper[5119]: E0121 09:55:01.990342 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb661f7376881 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.525448833 +0000 UTC m=+1.193540511,LastTimestamp:2026-01-21 09:54:45.525448833 +0000 UTC m=+1.193540511,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:01 crc 
kubenswrapper[5119]: E0121 09:55:01.995274 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb661f73890d7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.525524695 +0000 UTC m=+1.193616373,LastTimestamp:2026-01-21 09:54:45.525524695 +0000 UTC m=+1.193616373,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.000097 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188cb661f7412764 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.526087524 +0000 UTC m=+1.194179202,LastTimestamp:2026-01-21 09:54:45.526087524 +0000 UTC m=+1.194179202,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.004369 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cb661f74c788b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.526829195 +0000 UTC m=+1.194920873,LastTimestamp:2026-01-21 09:54:45.526829195 +0000 UTC m=+1.194920873,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.009111 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cb661f7c43408 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.534675976 +0000 UTC 
m=+1.202767654,LastTimestamp:2026-01-21 09:54:45.534675976 +0000 UTC m=+1.202767654,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.013117 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb661f7d1cffe openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.53556787 +0000 UTC m=+1.203659548,LastTimestamp:2026-01-21 09:54:45.53556787 +0000 UTC m=+1.203659548,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.017862 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cb661f7d80e56 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.535977046 +0000 UTC m=+1.204068724,LastTimestamp:2026-01-21 09:54:45.535977046 +0000 UTC m=+1.204068724,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.022487 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188cb661f81ae030 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.540356144 +0000 UTC m=+1.208447822,LastTimestamp:2026-01-21 09:54:45.540356144 +0000 UTC m=+1.208447822,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.027114 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb661f81cecec openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.540490476 +0000 UTC m=+1.208582154,LastTimestamp:2026-01-21 09:54:45.540490476 +0000 UTC m=+1.208582154,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.031779 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cb661f822631c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.540848412 +0000 UTC m=+1.208940090,LastTimestamp:2026-01-21 09:54:45.540848412 +0000 UTC m=+1.208940090,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.036521 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb661fb7c0720 openshift-kube-apiserver 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.597054752 +0000 UTC m=+1.265146430,LastTimestamp:2026-01-21 09:54:45.597054752 +0000 UTC m=+1.265146430,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.043940 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb661fb9dc54e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.599266126 +0000 UTC m=+1.267357804,LastTimestamp:2026-01-21 09:54:45.599266126 +0000 UTC m=+1.267357804,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.049501 5119 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188cb661fbe611ae openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.60400427 +0000 UTC m=+1.272095948,LastTimestamp:2026-01-21 09:54:45.60400427 +0000 UTC m=+1.272095948,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.053884 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cb661fc0a88f0 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.606394096 +0000 
UTC m=+1.274485774,LastTimestamp:2026-01-21 09:54:45.606394096 +0000 UTC m=+1.274485774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.058422 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb6620a143b35 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.841910581 +0000 UTC m=+1.510002259,LastTimestamp:2026-01-21 09:54:45.841910581 +0000 UTC m=+1.510002259,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.062561 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cb6620a143b67 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 
09:54:45.841910631 +0000 UTC m=+1.510002299,LastTimestamp:2026-01-21 09:54:45.841910631 +0000 UTC m=+1.510002299,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.073041 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb6620a14eb2f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.841955631 +0000 UTC m=+1.510047309,LastTimestamp:2026-01-21 09:54:45.841955631 +0000 UTC m=+1.510047309,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.079227 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188cb6620a213568 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: 
kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.842761064 +0000 UTC m=+1.510852742,LastTimestamp:2026-01-21 09:54:45.842761064 +0000 UTC m=+1.510852742,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.084356 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cb6620a21e51d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.842806045 +0000 UTC m=+1.510897723,LastTimestamp:2026-01-21 09:54:45.842806045 +0000 UTC m=+1.510897723,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.088905 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cb6620ab30bbc openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.852318652 +0000 UTC m=+1.520410330,LastTimestamp:2026-01-21 09:54:45.852318652 +0000 UTC m=+1.520410330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.092530 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cb6620ac97805 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.853788165 +0000 UTC m=+1.521879843,LastTimestamp:2026-01-21 09:54:45.853788165 +0000 UTC m=+1.521879843,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.097125 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" 
in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cb6620b09e5ac openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.85801054 +0000 UTC m=+1.526102218,LastTimestamp:2026-01-21 09:54:45.85801054 +0000 UTC m=+1.526102218,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.101154 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb6620b1bb759 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.859178329 +0000 UTC m=+1.527270007,LastTimestamp:2026-01-21 09:54:45.859178329 +0000 UTC m=+1.527270007,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.104904 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cb6620b1e391d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.859342621 +0000 UTC m=+1.527434299,LastTimestamp:2026-01-21 09:54:45.859342621 +0000 UTC m=+1.527434299,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.108171 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb6620b2f0a92 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.860444818 +0000 UTC m=+1.528536496,LastTimestamp:2026-01-21 09:54:45.860444818 +0000 UTC m=+1.528536496,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.112439 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188cb6620b3a6d57 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.861190999 +0000 UTC m=+1.529282677,LastTimestamp:2026-01-21 09:54:45.861190999 +0000 UTC m=+1.529282677,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.117209 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb6620b47faed openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present 
on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:45.862079213 +0000 UTC m=+1.530170891,LastTimestamp:2026-01-21 09:54:45.862079213 +0000 UTC m=+1.530170891,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.122011 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cb662187e263e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:46.083733054 +0000 UTC m=+1.751824732,LastTimestamp:2026-01-21 09:54:46.083733054 +0000 UTC m=+1.751824732,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.126157 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cb6621974ac98 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:46.099889304 +0000 UTC m=+1.767980982,LastTimestamp:2026-01-21 09:54:46.099889304 +0000 UTC m=+1.767980982,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.130424 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cb662198f23b2 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:46.10162373 +0000 UTC m=+1.769715408,LastTimestamp:2026-01-21 09:54:46.10162373 +0000 UTC m=+1.769715408,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.139069 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cb66227c948e4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:46.340315364 +0000 UTC m=+2.008407042,LastTimestamp:2026-01-21 09:54:46.340315364 +0000 UTC m=+2.008407042,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.145445 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb66227dbe421 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:46.341534753 +0000 UTC m=+2.009626431,LastTimestamp:2026-01-21 09:54:46.341534753 +0000 UTC m=+2.009626431,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 
09:55:02.151120 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cb662281f71fd openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:46.345961981 +0000 UTC m=+2.014053659,LastTimestamp:2026-01-21 09:54:46.345961981 +0000 UTC m=+2.014053659,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.154925 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cb662284f5567 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:46.349100391 +0000 UTC m=+2.017192069,LastTimestamp:2026-01-21 09:54:46.349100391 +0000 UTC 
m=+2.017192069,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.158857 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb66228c0c697 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:46.356534935 +0000 UTC m=+2.024626613,LastTimestamp:2026-01-21 09:54:46.356534935 +0000 UTC m=+2.024626613,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.163203 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb66228d0ea1a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:46.357592602 +0000 UTC m=+2.025684280,LastTimestamp:2026-01-21 09:54:46.357592602 +0000 UTC m=+2.025684280,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.183479 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cb66228d1caad openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:46.357650093 +0000 UTC m=+2.025741771,LastTimestamp:2026-01-21 09:54:46.357650093 +0000 UTC m=+2.025741771,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.188055 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cb66228deb0d6 openshift-kube-controller-manager 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:46.358495446 +0000 UTC m=+2.026587124,LastTimestamp:2026-01-21 09:54:46.358495446 +0000 UTC m=+2.026587124,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.192524 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb6623455c2a3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:46.550848163 +0000 UTC m=+2.218939841,LastTimestamp:2026-01-21 09:54:46.550848163 +0000 UTC m=+2.218939841,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.199159 5119 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cb662347452e5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:46.552851173 +0000 UTC m=+2.220942851,LastTimestamp:2026-01-21 09:54:46.552851173 +0000 UTC m=+2.220942851,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.203789 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb66234d4f8a6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:46.559185062 +0000 UTC m=+2.227276740,LastTimestamp:2026-01-21 09:54:46.559185062 +0000 UTC 
m=+2.227276740,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.207652 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cb66234e51138 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:46.560239928 +0000 UTC m=+2.228331596,LastTimestamp:2026-01-21 09:54:46.560239928 +0000 UTC m=+2.228331596,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.212457 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb66234e81471 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:46.560437361 +0000 UTC m=+2.228529039,LastTimestamp:2026-01-21 09:54:46.560437361 +0000 UTC m=+2.228529039,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.217311 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb66238d2597b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:46.626122107 +0000 UTC m=+2.294213785,LastTimestamp:2026-01-21 09:54:46.626122107 +0000 UTC m=+2.294213785,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.221505 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb6623f9ac3fc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:46.739919868 +0000 UTC m=+2.408011546,LastTimestamp:2026-01-21 09:54:46.739919868 +0000 UTC m=+2.408011546,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.225659 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb662405f3c20 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:46.75279568 +0000 UTC m=+2.420887358,LastTimestamp:2026-01-21 09:54:46.75279568 +0000 UTC m=+2.420887358,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.231013 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.188cb662408d0b37 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:46.755797815 +0000 UTC m=+2.423889493,LastTimestamp:2026-01-21 09:54:46.755797815 +0000 UTC m=+2.423889493,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.234976 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb66244eeb014 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:46.829305876 +0000 UTC m=+2.497397544,LastTimestamp:2026-01-21 09:54:46.829305876 +0000 UTC m=+2.497397544,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.239352 5119 event.go:359] "Server rejected event (will not 
retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb66245c31b5f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:46.843226975 +0000 UTC m=+2.511318653,LastTimestamp:2026-01-21 09:54:46.843226975 +0000 UTC m=+2.511318653,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.243809 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb6624bef7f34 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:46.946799412 +0000 UTC m=+2.614891090,LastTimestamp:2026-01-21 09:54:46.946799412 +0000 UTC m=+2.614891090,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.246148 5119 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb6624c792083 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:46.955819139 +0000 UTC m=+2.623910837,LastTimestamp:2026-01-21 09:54:46.955819139 +0000 UTC m=+2.623910837,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.248791 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb66274ef0e1a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:47.634636314 +0000 UTC m=+3.302728002,LastTimestamp:2026-01-21 09:54:47.634636314 +0000 UTC m=+3.302728002,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.250701 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb6627de60ddb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:47.785041371 +0000 UTC m=+3.453133049,LastTimestamp:2026-01-21 09:54:47.785041371 +0000 UTC m=+3.453133049,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.252938 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb6627ebfb81e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:47.79930627 +0000 UTC m=+3.467397948,LastTimestamp:2026-01-21 09:54:47.79930627 +0000 UTC m=+3.467397948,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 
09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.255530 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb6627ed022ed openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:47.800382189 +0000 UTC m=+3.468473867,LastTimestamp:2026-01-21 09:54:47.800382189 +0000 UTC m=+3.468473867,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.256816 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb66287b59a7c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:47.949638268 +0000 UTC m=+3.617729946,LastTimestamp:2026-01-21 09:54:47.949638268 +0000 UTC m=+3.617729946,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.259894 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb662883842df openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:47.958201055 +0000 UTC m=+3.626292733,LastTimestamp:2026-01-21 09:54:47.958201055 +0000 UTC m=+3.626292733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.260642 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb6628845adc0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:47.959080384 +0000 UTC m=+3.627172062,LastTimestamp:2026-01-21 09:54:47.959080384 +0000 UTC 
m=+3.627172062,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.263970 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb66291d5bf63 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:48.119517027 +0000 UTC m=+3.787608705,LastTimestamp:2026-01-21 09:54:48.119517027 +0000 UTC m=+3.787608705,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.267528 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb6629270bfad openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:48.129675181 +0000 UTC m=+3.797766859,LastTimestamp:2026-01-21 09:54:48.129675181 +0000 UTC m=+3.797766859,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.270681 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb662927db0cc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:48.13052334 +0000 UTC m=+3.798615008,LastTimestamp:2026-01-21 09:54:48.13052334 +0000 UTC m=+3.798615008,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.274022 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb6629cb2b78d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:48.301770637 +0000 UTC m=+3.969862315,LastTimestamp:2026-01-21 09:54:48.301770637 +0000 UTC 
m=+3.969862315,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.277663 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb6629d61a52a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:48.31323473 +0000 UTC m=+3.981326408,LastTimestamp:2026-01-21 09:54:48.31323473 +0000 UTC m=+3.981326408,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.281201 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb6629d72d758 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:48.314361688 +0000 UTC 
m=+3.982453366,LastTimestamp:2026-01-21 09:54:48.314361688 +0000 UTC m=+3.982453366,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.285110 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb662a7db2bf1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:48.488971249 +0000 UTC m=+4.157062917,LastTimestamp:2026-01-21 09:54:48.488971249 +0000 UTC m=+4.157062917,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.288655 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb662a8684e7b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:48.498220667 +0000 UTC m=+4.166312335,LastTimestamp:2026-01-21 09:54:48.498220667 +0000 UTC 
m=+4.166312335,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.293150 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Jan 21 09:55:02 crc kubenswrapper[5119]: &Event{ObjectMeta:{kube-controller-manager-crc.188cb662ff10037d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Jan 21 09:55:02 crc kubenswrapper[5119]: body: Jan 21 09:55:02 crc kubenswrapper[5119]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:49.952052093 +0000 UTC m=+5.620143781,LastTimestamp:2026-01-21 09:54:49.952052093 +0000 UTC m=+5.620143781,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 21 09:55:02 crc kubenswrapper[5119]: > Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.296787 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cb662ff12012c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:49.952182572 +0000 UTC m=+5.620274260,LastTimestamp:2026-01-21 09:54:49.952182572 +0000 UTC m=+5.620274260,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.301847 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 21 09:55:02 crc kubenswrapper[5119]: &Event{ObjectMeta:{kube-apiserver-crc.188cb6649706560c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 21 09:55:02 crc kubenswrapper[5119]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 21 09:55:02 crc kubenswrapper[5119]: Jan 21 09:55:02 crc kubenswrapper[5119]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:56.796521996 +0000 UTC m=+12.464613674,LastTimestamp:2026-01-21 09:54:56.796521996 +0000 UTC m=+12.464613674,Count:1,Type:Warning,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 21 09:55:02 crc kubenswrapper[5119]: > Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.306827 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb6649706eb09 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:56.796560137 +0000 UTC m=+12.464651815,LastTimestamp:2026-01-21 09:54:56.796560137 +0000 UTC m=+12.464651815,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.311323 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cb6649706560c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 21 09:55:02 crc kubenswrapper[5119]: &Event{ObjectMeta:{kube-apiserver-crc.188cb6649706560c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 21 
09:55:02 crc kubenswrapper[5119]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 21 09:55:02 crc kubenswrapper[5119]: Jan 21 09:55:02 crc kubenswrapper[5119]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:56.796521996 +0000 UTC m=+12.464613674,LastTimestamp:2026-01-21 09:54:56.830617928 +0000 UTC m=+12.498709606,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 21 09:55:02 crc kubenswrapper[5119]: > Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.314901 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cb6649706eb09\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb6649706eb09 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:56.796560137 +0000 UTC m=+12.464651815,LastTimestamp:2026-01-21 09:54:56.830658809 +0000 UTC m=+12.498750487,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.319465 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group 
\"\" in the namespace \"openshift-kube-controller-manager\"" event=< Jan 21 09:55:02 crc kubenswrapper[5119]: &Event{ObjectMeta:{kube-controller-manager-crc.188cb66553230dd8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 21 09:55:02 crc kubenswrapper[5119]: body: Jan 21 09:55:02 crc kubenswrapper[5119]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:59.952520664 +0000 UTC m=+15.620612342,LastTimestamp:2026-01-21 09:54:59.952520664 +0000 UTC m=+15.620612342,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 21 09:55:02 crc kubenswrapper[5119]: > Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.323137 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cb6655323de5d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:59.952574045 +0000 UTC m=+15.620665723,LastTimestamp:2026-01-21 09:54:59.952574045 +0000 UTC m=+15.620665723,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.327192 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 21 09:55:02 crc kubenswrapper[5119]: &Event{ObjectMeta:{kube-apiserver-crc.188cb6658e03b68d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Jan 21 09:55:02 crc kubenswrapper[5119]: body: Jan 21 09:55:02 crc kubenswrapper[5119]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:55:00.940322445 +0000 UTC m=+16.608414163,LastTimestamp:2026-01-21 09:55:00.940322445 +0000 UTC m=+16.608414163,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 21 09:55:02 crc kubenswrapper[5119]: > Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.331359 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.188cb6658e0a1cd6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:55:00.940741846 +0000 UTC m=+16.608833564,LastTimestamp:2026-01-21 09:55:00.940741846 +0000 UTC m=+16.608833564,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.335920 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cb6658e03b68d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 21 09:55:02 crc kubenswrapper[5119]: &Event{ObjectMeta:{kube-apiserver-crc.188cb6658e03b68d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Jan 21 09:55:02 crc kubenswrapper[5119]: body: Jan 21 09:55:02 crc kubenswrapper[5119]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:55:00.940322445 +0000 UTC m=+16.608414163,LastTimestamp:2026-01-21 09:55:00.947833076 +0000 UTC 
m=+16.615924794,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 21 09:55:02 crc kubenswrapper[5119]: > Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.339820 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cb6658e0a1cd6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb6658e0a1cd6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:55:00.940741846 +0000 UTC m=+16.608833564,LastTimestamp:2026-01-21 09:55:00.947894188 +0000 UTC m=+16.615985906,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.344909 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 21 09:55:02 crc kubenswrapper[5119]: &Event{ObjectMeta:{kube-apiserver-crc.188cb665a7f3c5ff openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused
Jan 21 09:55:02 crc kubenswrapper[5119]: body:
Jan 21 09:55:02 crc kubenswrapper[5119]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:55:01.375485439 +0000 UTC m=+17.043577127,LastTimestamp:2026-01-21 09:55:01.375485439 +0000 UTC m=+17.043577127,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 21 09:55:02 crc kubenswrapper[5119]: >
Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.353149 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb665a7f47bcc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:55:01.37553198 +0000 UTC m=+17.043623668,LastTimestamp:2026-01-21 09:55:01.37553198 +0000 UTC m=+17.043623668,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.358172 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cb6658e03b68d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Jan 21 09:55:02 crc kubenswrapper[5119]: &Event{ObjectMeta:{kube-apiserver-crc.188cb6658e03b68d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused
Jan 21 09:55:02 crc kubenswrapper[5119]: body:
Jan 21 09:55:02 crc kubenswrapper[5119]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:55:00.940322445 +0000 UTC m=+16.608414163,LastTimestamp:2026-01-21 09:55:01.673831173 +0000 UTC m=+17.341922861,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 21 09:55:02 crc kubenswrapper[5119]: >
Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.363964 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cb6658e0a1cd6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb6658e0a1cd6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:55:00.940741846 +0000 UTC m=+16.608833564,LastTimestamp:2026-01-21 09:55:01.673889774 +0000 UTC m=+17.341981482,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:55:02 crc kubenswrapper[5119]: I0121 09:55:02.533240 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:55:02 crc kubenswrapper[5119]: I0121 09:55:02.656812 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 09:55:02 crc kubenswrapper[5119]: I0121 09:55:02.657059 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:55:02 crc kubenswrapper[5119]: I0121 09:55:02.658018 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:55:02 crc kubenswrapper[5119]: I0121 09:55:02.658091 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:55:02 crc kubenswrapper[5119]: I0121 09:55:02.658111 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.658709 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:55:02 crc kubenswrapper[5119]: I0121 09:55:02.677307 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Jan 21 09:55:02 crc kubenswrapper[5119]: I0121 09:55:02.679444 5119 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="e618c4b2bbe23b1682630f31e631d6b91a402f4e321be95db2902a717d1a143e" exitCode=255
Jan 21 09:55:02 crc kubenswrapper[5119]: I0121 09:55:02.679515 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"e618c4b2bbe23b1682630f31e631d6b91a402f4e321be95db2902a717d1a143e"}
Jan 21 09:55:02 crc kubenswrapper[5119]: I0121 09:55:02.679760 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:55:02 crc kubenswrapper[5119]: I0121 09:55:02.680342 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:55:02 crc kubenswrapper[5119]: I0121 09:55:02.680417 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:55:02 crc kubenswrapper[5119]: I0121 09:55:02.680431 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.680823 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:55:02 crc kubenswrapper[5119]: I0121 09:55:02.681091 5119 scope.go:117] "RemoveContainer" containerID="e618c4b2bbe23b1682630f31e631d6b91a402f4e321be95db2902a717d1a143e"
Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.687674 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cb662408d0b37\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb662408d0b37 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:46.755797815 +0000 UTC m=+2.423889493,LastTimestamp:2026-01-21 09:55:02.682161514 +0000 UTC m=+18.350253202,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.889875 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cb6624bef7f34\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb6624bef7f34 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:46.946799412 +0000 UTC m=+2.614891090,LastTimestamp:2026-01-21 09:55:02.884678042 +0000 UTC m=+18.552769720,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:55:02 crc kubenswrapper[5119]: E0121 09:55:02.898432 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cb6624c792083\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb6624c792083 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:46.955819139 +0000 UTC m=+2.623910837,LastTimestamp:2026-01-21 09:55:02.893734434 +0000 UTC m=+18.561826112,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:55:03 crc kubenswrapper[5119]: I0121 09:55:03.529949 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:55:03 crc kubenswrapper[5119]: I0121 09:55:03.683162 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Jan 21 09:55:03 crc kubenswrapper[5119]: I0121 09:55:03.684597 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"4a4e91dcb091c5a2a7e9fcbb3342f27f9fc55a37c301d5f129dc82534a492f8c"}
Jan 21 09:55:03 crc kubenswrapper[5119]: I0121 09:55:03.684874 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:55:03 crc kubenswrapper[5119]: I0121 09:55:03.685493 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:55:03 crc kubenswrapper[5119]: I0121 09:55:03.685525 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:55:03 crc kubenswrapper[5119]: I0121 09:55:03.685535 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:55:03 crc kubenswrapper[5119]: E0121 09:55:03.685838 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:55:04 crc kubenswrapper[5119]: I0121 09:55:04.531433 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:55:04 crc kubenswrapper[5119]: E0121 09:55:04.656743 5119 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 21 09:55:04 crc kubenswrapper[5119]: I0121 09:55:04.688151 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Jan 21 09:55:04 crc kubenswrapper[5119]: I0121 09:55:04.688739 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Jan 21 09:55:04 crc kubenswrapper[5119]: I0121 09:55:04.690193 5119 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="4a4e91dcb091c5a2a7e9fcbb3342f27f9fc55a37c301d5f129dc82534a492f8c" exitCode=255
Jan 21 09:55:04 crc kubenswrapper[5119]: I0121 09:55:04.690240 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"4a4e91dcb091c5a2a7e9fcbb3342f27f9fc55a37c301d5f129dc82534a492f8c"}
Jan 21 09:55:04 crc kubenswrapper[5119]: I0121 09:55:04.690292 5119 scope.go:117] "RemoveContainer" containerID="e618c4b2bbe23b1682630f31e631d6b91a402f4e321be95db2902a717d1a143e"
Jan 21 09:55:04 crc kubenswrapper[5119]: I0121 09:55:04.690463 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:55:04 crc kubenswrapper[5119]: I0121 09:55:04.690998 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:55:04 crc kubenswrapper[5119]: I0121 09:55:04.691108 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:55:04 crc kubenswrapper[5119]: I0121 09:55:04.691218 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:55:04 crc kubenswrapper[5119]: E0121 09:55:04.691920 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:55:04 crc kubenswrapper[5119]: I0121 09:55:04.692172 5119 scope.go:117] "RemoveContainer" containerID="4a4e91dcb091c5a2a7e9fcbb3342f27f9fc55a37c301d5f129dc82534a492f8c"
Jan 21 09:55:04 crc kubenswrapper[5119]: E0121 09:55:04.692440 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 21 09:55:04 crc kubenswrapper[5119]: E0121 09:55:04.698229 5119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb6666da7f168 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:55:04.692404584 +0000 UTC m=+20.360496262,LastTimestamp:2026-01-21 09:55:04.692404584 +0000 UTC m=+20.360496262,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:55:05 crc kubenswrapper[5119]: I0121 09:55:05.028213 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:55:05 crc kubenswrapper[5119]: I0121 09:55:05.029263 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:55:05 crc kubenswrapper[5119]: I0121 09:55:05.029317 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:55:05 crc kubenswrapper[5119]: I0121 09:55:05.029327 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:55:05 crc kubenswrapper[5119]: I0121 09:55:05.029354 5119 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 21 09:55:05 crc kubenswrapper[5119]: E0121 09:55:05.038839 5119 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 21 09:55:05 crc kubenswrapper[5119]: E0121 09:55:05.083461 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 21 09:55:05 crc kubenswrapper[5119]: I0121 09:55:05.530666 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:55:05 crc kubenswrapper[5119]: E0121 09:55:05.537201 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 21 09:55:05 crc kubenswrapper[5119]: E0121 09:55:05.667553 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 21 09:55:05 crc kubenswrapper[5119]: I0121 09:55:05.693854 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Jan 21 09:55:06 crc kubenswrapper[5119]: I0121 09:55:06.529105 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:55:06 crc kubenswrapper[5119]: I0121 09:55:06.956031 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 09:55:06 crc kubenswrapper[5119]: I0121 09:55:06.956223 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:55:06 crc kubenswrapper[5119]: I0121 09:55:06.956888 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:55:06 crc kubenswrapper[5119]: I0121 09:55:06.956919 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:55:06 crc kubenswrapper[5119]: I0121 09:55:06.956932 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:55:06 crc kubenswrapper[5119]: E0121 09:55:06.957225 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:55:06 crc kubenswrapper[5119]: I0121 09:55:06.960567 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 09:55:07 crc kubenswrapper[5119]: E0121 09:55:07.167286 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 21 09:55:07 crc kubenswrapper[5119]: E0121 09:55:07.497948 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 21 09:55:07 crc kubenswrapper[5119]: I0121 09:55:07.532145 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:55:07 crc kubenswrapper[5119]: I0121 09:55:07.699883 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:55:07 crc kubenswrapper[5119]: I0121 09:55:07.701011 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:55:07 crc kubenswrapper[5119]: I0121 09:55:07.701071 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:55:07 crc kubenswrapper[5119]: I0121 09:55:07.701091 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:55:07 crc kubenswrapper[5119]: E0121 09:55:07.701557 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:55:08 crc kubenswrapper[5119]: I0121 09:55:08.534408 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:55:09 crc kubenswrapper[5119]: I0121 09:55:09.529350 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:55:10 crc kubenswrapper[5119]: I0121 09:55:10.530445 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:55:11 crc kubenswrapper[5119]: I0121 09:55:11.374866 5119 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 09:55:11 crc kubenswrapper[5119]: I0121 09:55:11.375426 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:55:11 crc kubenswrapper[5119]: I0121 09:55:11.376456 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:55:11 crc kubenswrapper[5119]: I0121 09:55:11.376542 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:55:11 crc kubenswrapper[5119]: I0121 09:55:11.376565 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:55:11 crc kubenswrapper[5119]: E0121 09:55:11.377177 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:55:11 crc kubenswrapper[5119]: I0121 09:55:11.377797 5119 scope.go:117] "RemoveContainer" containerID="4a4e91dcb091c5a2a7e9fcbb3342f27f9fc55a37c301d5f129dc82534a492f8c"
Jan 21 09:55:11 crc kubenswrapper[5119]: E0121 09:55:11.378149 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 21 09:55:11 crc kubenswrapper[5119]: E0121 09:55:11.383897 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cb6666da7f168\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb6666da7f168 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:55:04.692404584 +0000 UTC m=+20.360496262,LastTimestamp:2026-01-21 09:55:11.378093127 +0000 UTC m=+27.046184845,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:55:11 crc kubenswrapper[5119]: I0121 09:55:11.439238 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:55:11 crc kubenswrapper[5119]: I0121 09:55:11.440791 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:55:11 crc kubenswrapper[5119]: I0121 09:55:11.440856 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:55:11 crc kubenswrapper[5119]: I0121 09:55:11.440922 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:55:11 crc kubenswrapper[5119]: I0121 09:55:11.440956 5119 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 21 09:55:11 crc kubenswrapper[5119]: E0121 09:55:11.454852 5119 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 21 09:55:11 crc kubenswrapper[5119]: I0121 09:55:11.529830 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:55:12 crc kubenswrapper[5119]: E0121 09:55:12.135177 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 21 09:55:12 crc kubenswrapper[5119]: I0121 09:55:12.533383 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:55:13 crc kubenswrapper[5119]: I0121 09:55:13.529300 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:55:13 crc kubenswrapper[5119]: I0121 09:55:13.685240 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 09:55:13 crc kubenswrapper[5119]: I0121 09:55:13.685482 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:55:13 crc kubenswrapper[5119]: I0121 09:55:13.686256 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:55:13 crc kubenswrapper[5119]: I0121 09:55:13.686373 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:55:13 crc kubenswrapper[5119]: I0121 09:55:13.686441 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:55:13 crc kubenswrapper[5119]: E0121 09:55:13.686770 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:55:13 crc kubenswrapper[5119]: I0121 09:55:13.687060 5119 scope.go:117] "RemoveContainer" containerID="4a4e91dcb091c5a2a7e9fcbb3342f27f9fc55a37c301d5f129dc82534a492f8c"
Jan 21 09:55:13 crc kubenswrapper[5119]: E0121 09:55:13.687293 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 21 09:55:13 crc kubenswrapper[5119]: E0121 09:55:13.693539 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cb6666da7f168\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb6666da7f168 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:55:04.692404584 +0000 UTC m=+20.360496262,LastTimestamp:2026-01-21 09:55:13.687267835 +0000 UTC m=+29.355359513,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:55:14 crc kubenswrapper[5119]: E0121 09:55:14.173400 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 21 09:55:14 crc kubenswrapper[5119]: E0121 09:55:14.323449 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 21 09:55:14 crc kubenswrapper[5119]: I0121 09:55:14.530942 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:55:14 crc kubenswrapper[5119]: E0121 09:55:14.657478 5119 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 21 09:55:15 crc kubenswrapper[5119]: I0121 09:55:15.527879 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:55:15 crc kubenswrapper[5119]: E0121 09:55:15.775067 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 21 09:55:16 crc kubenswrapper[5119]: I0121 09:55:16.531875 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:55:17 crc kubenswrapper[5119]: E0121 09:55:17.523834 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 21 09:55:17 crc kubenswrapper[5119]: I0121 09:55:17.529889 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:55:18 crc kubenswrapper[5119]: I0121 09:55:18.455152 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:55:18 crc kubenswrapper[5119]: I0121 09:55:18.456031 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:55:18 crc kubenswrapper[5119]: I0121 09:55:18.456099 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:55:18 crc kubenswrapper[5119]: I0121 09:55:18.456118 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:55:18 crc kubenswrapper[5119]: I0121 09:55:18.456190 5119 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 21 09:55:18 crc kubenswrapper[5119]: E0121 09:55:18.468194 5119 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 21 09:55:18 crc kubenswrapper[5119]: I0121 09:55:18.531782 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:55:19 crc kubenswrapper[5119]: I0121 09:55:19.533647 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:55:20 crc kubenswrapper[5119]: I0121 09:55:20.533712 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:55:21 crc kubenswrapper[5119]: E0121 09:55:21.182938 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 21 09:55:21 crc kubenswrapper[5119]: I0121 09:55:21.532331 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:55:22 crc kubenswrapper[5119]: I0121 09:55:22.532179 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:55:23 crc kubenswrapper[5119]: I0121 09:55:23.533571 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:55:24 crc kubenswrapper[5119]: I0121 09:55:24.533237 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:55:24 crc kubenswrapper[5119]: E0121 09:55:24.658975 5119 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 21 09:55:25 crc kubenswrapper[5119]: I0121 09:55:25.468739 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:55:25 crc kubenswrapper[5119]: I0121 09:55:25.470295 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:55:25 crc kubenswrapper[5119]: I0121 09:55:25.470525 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:55:25 crc kubenswrapper[5119]: I0121 09:55:25.470767 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:55:25 crc kubenswrapper[5119]: I0121 09:55:25.470954 5119 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 21 09:55:25 crc kubenswrapper[5119]: E0121 09:55:25.487104 5119 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 21 09:55:25 crc kubenswrapper[5119]: I0121 09:55:25.534000 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:55:26 crc kubenswrapper[5119]: I0121 09:55:26.532457 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:55:27 crc kubenswrapper[5119]: I0121 09:55:27.532746 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:55:28 crc kubenswrapper[5119]: E0121 09:55:28.191377 5119 controller.go:145] "Failed to ensure lease 
exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 21 09:55:28 crc kubenswrapper[5119]: I0121 09:55:28.533058 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:55:28 crc kubenswrapper[5119]: I0121 09:55:28.591048 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:55:28 crc kubenswrapper[5119]: I0121 09:55:28.592552 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:55:28 crc kubenswrapper[5119]: I0121 09:55:28.592648 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:55:28 crc kubenswrapper[5119]: I0121 09:55:28.592662 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:55:28 crc kubenswrapper[5119]: E0121 09:55:28.593205 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:55:28 crc kubenswrapper[5119]: I0121 09:55:28.593590 5119 scope.go:117] "RemoveContainer" containerID="4a4e91dcb091c5a2a7e9fcbb3342f27f9fc55a37c301d5f129dc82534a492f8c" Jan 21 09:55:28 crc kubenswrapper[5119]: E0121 09:55:28.604382 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cb662408d0b37\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb662408d0b37 openshift-kube-apiserver 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:46.755797815 +0000 UTC m=+2.423889493,LastTimestamp:2026-01-21 09:55:28.595297829 +0000 UTC m=+44.263389507,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:29 crc kubenswrapper[5119]: E0121 09:55:29.162632 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 21 09:55:29 crc kubenswrapper[5119]: I0121 09:55:29.533655 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:55:29 crc kubenswrapper[5119]: E0121 09:55:29.820397 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cb6624bef7f34\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb6624bef7f34 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:46.946799412 +0000 UTC m=+2.614891090,LastTimestamp:2026-01-21 09:55:29.8162823 +0000 UTC m=+45.484373968,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:29 crc kubenswrapper[5119]: E0121 09:55:29.834495 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cb6624c792083\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb6624c792083 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:54:46.955819139 +0000 UTC m=+2.623910837,LastTimestamp:2026-01-21 09:55:29.830302535 +0000 UTC m=+45.498394213,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:30 crc kubenswrapper[5119]: I0121 09:55:30.529640 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Jan 21 09:55:30 crc kubenswrapper[5119]: I0121 09:55:30.760832 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 21 09:55:30 crc kubenswrapper[5119]: I0121 09:55:30.764221 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"c811334bae3827591740f0dae0b6af099e6b9574913f927c10c8480d23d8fedd"} Jan 21 09:55:30 crc kubenswrapper[5119]: I0121 09:55:30.764773 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:55:30 crc kubenswrapper[5119]: I0121 09:55:30.765848 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:55:30 crc kubenswrapper[5119]: I0121 09:55:30.766056 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:55:30 crc kubenswrapper[5119]: I0121 09:55:30.766429 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:55:30 crc kubenswrapper[5119]: E0121 09:55:30.767297 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:55:31 crc kubenswrapper[5119]: I0121 09:55:31.292644 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 09:55:31 crc kubenswrapper[5119]: I0121 09:55:31.292838 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:55:31 crc kubenswrapper[5119]: I0121 09:55:31.294330 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 09:55:31 crc kubenswrapper[5119]: I0121 09:55:31.294535 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:55:31 crc kubenswrapper[5119]: I0121 09:55:31.294644 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:55:31 crc kubenswrapper[5119]: E0121 09:55:31.295267 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:55:31 crc kubenswrapper[5119]: I0121 09:55:31.534062 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:55:31 crc kubenswrapper[5119]: I0121 09:55:31.770067 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 21 09:55:31 crc kubenswrapper[5119]: I0121 09:55:31.771725 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 21 09:55:31 crc kubenswrapper[5119]: I0121 09:55:31.775580 5119 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="c811334bae3827591740f0dae0b6af099e6b9574913f927c10c8480d23d8fedd" exitCode=255 Jan 21 09:55:31 crc kubenswrapper[5119]: I0121 09:55:31.775857 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"c811334bae3827591740f0dae0b6af099e6b9574913f927c10c8480d23d8fedd"} Jan 21 09:55:31 crc kubenswrapper[5119]: I0121 09:55:31.776088 
5119 scope.go:117] "RemoveContainer" containerID="4a4e91dcb091c5a2a7e9fcbb3342f27f9fc55a37c301d5f129dc82534a492f8c" Jan 21 09:55:31 crc kubenswrapper[5119]: I0121 09:55:31.776427 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:55:31 crc kubenswrapper[5119]: I0121 09:55:31.778063 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:55:31 crc kubenswrapper[5119]: I0121 09:55:31.778124 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:55:31 crc kubenswrapper[5119]: I0121 09:55:31.778148 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:55:31 crc kubenswrapper[5119]: E0121 09:55:31.778967 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:55:31 crc kubenswrapper[5119]: I0121 09:55:31.779698 5119 scope.go:117] "RemoveContainer" containerID="c811334bae3827591740f0dae0b6af099e6b9574913f927c10c8480d23d8fedd" Jan 21 09:55:31 crc kubenswrapper[5119]: E0121 09:55:31.780170 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 09:55:31 crc kubenswrapper[5119]: E0121 09:55:31.790640 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cb6666da7f168\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.188cb6666da7f168 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:55:04.692404584 +0000 UTC m=+20.360496262,LastTimestamp:2026-01-21 09:55:31.780101288 +0000 UTC m=+47.448193006,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:32 crc kubenswrapper[5119]: I0121 09:55:32.487906 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:55:32 crc kubenswrapper[5119]: I0121 09:55:32.489211 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:55:32 crc kubenswrapper[5119]: I0121 09:55:32.489283 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:55:32 crc kubenswrapper[5119]: I0121 09:55:32.489309 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:55:32 crc kubenswrapper[5119]: I0121 09:55:32.489352 5119 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 09:55:32 crc kubenswrapper[5119]: E0121 09:55:32.504462 5119 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" 
node="crc" Jan 21 09:55:32 crc kubenswrapper[5119]: I0121 09:55:32.530576 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:55:32 crc kubenswrapper[5119]: I0121 09:55:32.780763 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 21 09:55:33 crc kubenswrapper[5119]: I0121 09:55:33.532175 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:55:34 crc kubenswrapper[5119]: I0121 09:55:34.533163 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:55:34 crc kubenswrapper[5119]: E0121 09:55:34.660251 5119 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 09:55:35 crc kubenswrapper[5119]: E0121 09:55:35.198314 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 21 09:55:35 crc kubenswrapper[5119]: I0121 09:55:35.533407 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 
09:55:36 crc kubenswrapper[5119]: E0121 09:55:36.458530 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 21 09:55:36 crc kubenswrapper[5119]: I0121 09:55:36.532656 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:55:37 crc kubenswrapper[5119]: I0121 09:55:37.530382 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:55:38 crc kubenswrapper[5119]: E0121 09:55:38.375471 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 21 09:55:38 crc kubenswrapper[5119]: I0121 09:55:38.531992 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:55:39 crc kubenswrapper[5119]: I0121 09:55:39.505654 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:55:39 crc kubenswrapper[5119]: I0121 09:55:39.507180 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 
21 09:55:39 crc kubenswrapper[5119]: I0121 09:55:39.507264 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:55:39 crc kubenswrapper[5119]: I0121 09:55:39.507297 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:55:39 crc kubenswrapper[5119]: I0121 09:55:39.507344 5119 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 09:55:39 crc kubenswrapper[5119]: E0121 09:55:39.523172 5119 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 21 09:55:39 crc kubenswrapper[5119]: I0121 09:55:39.532698 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:55:40 crc kubenswrapper[5119]: I0121 09:55:40.530319 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:55:40 crc kubenswrapper[5119]: I0121 09:55:40.765108 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:55:40 crc kubenswrapper[5119]: I0121 09:55:40.765972 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:55:40 crc kubenswrapper[5119]: I0121 09:55:40.766712 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:55:40 crc kubenswrapper[5119]: I0121 09:55:40.766750 5119 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:55:40 crc kubenswrapper[5119]: I0121 09:55:40.766762 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:55:40 crc kubenswrapper[5119]: E0121 09:55:40.767093 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:55:40 crc kubenswrapper[5119]: I0121 09:55:40.767402 5119 scope.go:117] "RemoveContainer" containerID="c811334bae3827591740f0dae0b6af099e6b9574913f927c10c8480d23d8fedd" Jan 21 09:55:40 crc kubenswrapper[5119]: E0121 09:55:40.767709 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 09:55:40 crc kubenswrapper[5119]: E0121 09:55:40.772831 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cb6666da7f168\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb6666da7f168 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod 
kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:55:04.692404584 +0000 UTC m=+20.360496262,LastTimestamp:2026-01-21 09:55:40.767677166 +0000 UTC m=+56.435768844,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:41 crc kubenswrapper[5119]: I0121 09:55:41.374409 5119 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:55:41 crc kubenswrapper[5119]: I0121 09:55:41.374655 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:55:41 crc kubenswrapper[5119]: I0121 09:55:41.375507 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:55:41 crc kubenswrapper[5119]: I0121 09:55:41.375555 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:55:41 crc kubenswrapper[5119]: I0121 09:55:41.375569 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:55:41 crc kubenswrapper[5119]: E0121 09:55:41.376012 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:55:41 crc kubenswrapper[5119]: I0121 09:55:41.376294 5119 scope.go:117] "RemoveContainer" containerID="c811334bae3827591740f0dae0b6af099e6b9574913f927c10c8480d23d8fedd" Jan 21 09:55:41 crc kubenswrapper[5119]: E0121 09:55:41.376500 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 09:55:41 crc kubenswrapper[5119]: E0121 09:55:41.384635 5119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cb6666da7f168\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb6666da7f168 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:55:04.692404584 +0000 UTC m=+20.360496262,LastTimestamp:2026-01-21 09:55:41.376466176 +0000 UTC m=+57.044557854,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:55:41 crc kubenswrapper[5119]: I0121 09:55:41.531749 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:55:41 crc kubenswrapper[5119]: E0121 09:55:41.928430 5119 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 21 09:55:42 crc kubenswrapper[5119]: E0121 09:55:42.204937 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 21 09:55:42 crc kubenswrapper[5119]: I0121 09:55:42.531552 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:55:43 crc kubenswrapper[5119]: I0121 09:55:43.526990 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:55:44 crc kubenswrapper[5119]: I0121 09:55:44.529205 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:55:44 crc kubenswrapper[5119]: E0121 09:55:44.661100 5119 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 09:55:45 crc kubenswrapper[5119]: I0121 09:55:45.530040 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:55:46 crc kubenswrapper[5119]: I0121 09:55:46.523443 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:55:46 crc 
kubenswrapper[5119]: I0121 09:55:46.528479 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:55:46 crc kubenswrapper[5119]: I0121 09:55:46.528868 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:55:46 crc kubenswrapper[5119]: I0121 09:55:46.528896 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:55:46 crc kubenswrapper[5119]: I0121 09:55:46.528938 5119 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 09:55:46 crc kubenswrapper[5119]: I0121 09:55:46.529977 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:55:46 crc kubenswrapper[5119]: E0121 09:55:46.539430 5119 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 21 09:55:47 crc kubenswrapper[5119]: I0121 09:55:47.531240 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:55:48 crc kubenswrapper[5119]: I0121 09:55:48.528960 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:55:49 crc kubenswrapper[5119]: E0121 09:55:49.210999 5119 controller.go:145] "Failed to ensure lease exists, will retry" 
err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 21 09:55:49 crc kubenswrapper[5119]: I0121 09:55:49.529374 5119 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:55:49 crc kubenswrapper[5119]: I0121 09:55:49.680248 5119 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-v2sb9" Jan 21 09:55:49 crc kubenswrapper[5119]: I0121 09:55:49.686217 5119 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-v2sb9" Jan 21 09:55:49 crc kubenswrapper[5119]: I0121 09:55:49.729377 5119 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 21 09:55:50 crc kubenswrapper[5119]: I0121 09:55:50.458711 5119 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 21 09:55:50 crc kubenswrapper[5119]: I0121 09:55:50.687287 5119 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-02-20 09:50:49 +0000 UTC" deadline="2026-02-12 22:52:18.472251096 +0000 UTC" Jan 21 09:55:50 crc kubenswrapper[5119]: I0121 09:55:50.687350 5119 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="540h56m27.78490775s" Jan 21 09:55:53 crc kubenswrapper[5119]: I0121 09:55:53.543928 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:55:53 crc kubenswrapper[5119]: I0121 09:55:53.545395 5119 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:55:53 crc kubenswrapper[5119]: I0121 09:55:53.545458 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:55:53 crc kubenswrapper[5119]: I0121 09:55:53.545469 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:55:53 crc kubenswrapper[5119]: I0121 09:55:53.545561 5119 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 09:55:53 crc kubenswrapper[5119]: I0121 09:55:53.555157 5119 kubelet_node_status.go:127] "Node was previously registered" node="crc" Jan 21 09:55:53 crc kubenswrapper[5119]: I0121 09:55:53.555495 5119 kubelet_node_status.go:81] "Successfully registered node" node="crc" Jan 21 09:55:53 crc kubenswrapper[5119]: E0121 09:55:53.555598 5119 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Jan 21 09:55:53 crc kubenswrapper[5119]: I0121 09:55:53.564804 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:55:53 crc kubenswrapper[5119]: I0121 09:55:53.564829 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:55:53 crc kubenswrapper[5119]: I0121 09:55:53.564840 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:55:53 crc kubenswrapper[5119]: I0121 09:55:53.564853 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:55:53 crc kubenswrapper[5119]: I0121 09:55:53.564863 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:55:53Z","lastTransitionTime":"2026-01-21T09:55:53Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 09:55:53 crc kubenswrapper[5119]: E0121 09:55:53.578144 5119 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-m
arketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c
97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\
\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d
760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"22713055-0a0e-45db-9c06-ee5e114186db\\\",\\\"systemUUID\\\":\\\"53b028bb-3049-4b24-a969-520
f108bc223\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:55:53 crc kubenswrapper[5119]: I0121 09:55:53.585561 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:55:53 crc kubenswrapper[5119]: I0121 09:55:53.585622 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:55:53 crc kubenswrapper[5119]: I0121 09:55:53.585636 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:55:53 crc kubenswrapper[5119]: I0121 09:55:53.585650 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:55:53 crc kubenswrapper[5119]: I0121 09:55:53.585659 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:55:53Z","lastTransitionTime":"2026-01-21T09:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:55:53 crc kubenswrapper[5119]: E0121 09:55:53.595012 5119 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"22713055-0a0e-45db-9c06-ee5e114186db\\\",\\\"systemUUID\\\":\\\"53b028bb-3049-4b24-a969-520f108bc223\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:55:53 crc kubenswrapper[5119]: I0121 09:55:53.601369 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:55:53 crc kubenswrapper[5119]: I0121 09:55:53.601394 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:55:53 crc kubenswrapper[5119]: I0121 09:55:53.601402 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:55:53 crc kubenswrapper[5119]: I0121 09:55:53.601413 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:55:53 crc kubenswrapper[5119]: I0121 09:55:53.601421 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:55:53Z","lastTransitionTime":"2026-01-21T09:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:55:53 crc kubenswrapper[5119]: E0121 09:55:53.610431 5119 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"22713055-0a0e-45db-9c06-ee5e114186db\\\",\\\"systemUUID\\\":\\\"53b028bb-3049-4b24-a969-520f108bc223\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:55:53 crc kubenswrapper[5119]: I0121 09:55:53.616892 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:55:53 crc kubenswrapper[5119]: I0121 09:55:53.616918 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:55:53 crc kubenswrapper[5119]: I0121 09:55:53.616926 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:55:53 crc kubenswrapper[5119]: I0121 09:55:53.616944 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:55:53 crc kubenswrapper[5119]: I0121 09:55:53.616962 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:55:53Z","lastTransitionTime":"2026-01-21T09:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:55:53 crc kubenswrapper[5119]: E0121 09:55:53.630881 5119 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:55:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"22713055-0a0e-45db-9c06-ee5e114186db\\\",\\\"systemUUID\\\":\\\"53b028bb-3049-4b24-a969-520f108bc223\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:55:53 crc kubenswrapper[5119]: E0121 09:55:53.631059 5119 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 21 09:55:53 crc kubenswrapper[5119]: E0121 09:55:53.631084 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:55:53 crc kubenswrapper[5119]: E0121 09:55:53.731360 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:55:53 crc kubenswrapper[5119]: E0121 09:55:53.832140 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:55:53 crc kubenswrapper[5119]: E0121 09:55:53.933390 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:55:54 crc kubenswrapper[5119]: E0121 09:55:54.034413 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:55:54 crc kubenswrapper[5119]: E0121 09:55:54.135282 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:55:54 crc kubenswrapper[5119]: E0121 09:55:54.235570 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:55:54 crc kubenswrapper[5119]: E0121 09:55:54.335899 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:55:54 crc kubenswrapper[5119]: E0121 09:55:54.436799 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:55:54 crc kubenswrapper[5119]: E0121 09:55:54.537286 5119 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:55:54 crc kubenswrapper[5119]: E0121 09:55:54.637587 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:55:54 crc kubenswrapper[5119]: E0121 09:55:54.662172 5119 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 09:55:54 crc kubenswrapper[5119]: E0121 09:55:54.738692 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:55:54 crc kubenswrapper[5119]: E0121 09:55:54.839135 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:55:54 crc kubenswrapper[5119]: E0121 09:55:54.940043 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:55:55 crc kubenswrapper[5119]: E0121 09:55:55.040235 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:55:55 crc kubenswrapper[5119]: E0121 09:55:55.140998 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:55:55 crc kubenswrapper[5119]: E0121 09:55:55.241624 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:55:55 crc kubenswrapper[5119]: E0121 09:55:55.342329 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:55:55 crc kubenswrapper[5119]: E0121 09:55:55.443039 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:55:55 crc kubenswrapper[5119]: E0121 09:55:55.543904 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" 
Jan 21 09:55:55 crc kubenswrapper[5119]: I0121 09:55:55.591042 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:55:55 crc kubenswrapper[5119]: I0121 09:55:55.592012 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:55:55 crc kubenswrapper[5119]: I0121 09:55:55.592120 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:55:55 crc kubenswrapper[5119]: I0121 09:55:55.592230 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:55:55 crc kubenswrapper[5119]: E0121 09:55:55.592900 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:55:55 crc kubenswrapper[5119]: I0121 09:55:55.593238 5119 scope.go:117] "RemoveContainer" containerID="c811334bae3827591740f0dae0b6af099e6b9574913f927c10c8480d23d8fedd" Jan 21 09:55:55 crc kubenswrapper[5119]: E0121 09:55:55.652350 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:55:55 crc kubenswrapper[5119]: E0121 09:55:55.753729 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:55:55 crc kubenswrapper[5119]: I0121 09:55:55.841297 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 21 09:55:55 crc kubenswrapper[5119]: I0121 09:55:55.843046 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"af966673193b18b94972ab97e230ca79320cda15af6c9ff9d82808e946b8c6a8"} Jan 21 09:55:55 crc 
kubenswrapper[5119]: I0121 09:55:55.843374 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:55:55 crc kubenswrapper[5119]: I0121 09:55:55.843995 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:55:55 crc kubenswrapper[5119]: I0121 09:55:55.844028 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:55:55 crc kubenswrapper[5119]: I0121 09:55:55.844041 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:55:55 crc kubenswrapper[5119]: E0121 09:55:55.844394 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:55:55 crc kubenswrapper[5119]: E0121 09:55:55.854323 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:55 crc kubenswrapper[5119]: E0121 09:55:55.955161 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:56 crc kubenswrapper[5119]: E0121 09:55:56.055443 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:56 crc kubenswrapper[5119]: E0121 09:55:56.156303 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:56 crc kubenswrapper[5119]: E0121 09:55:56.257212 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:56 crc kubenswrapper[5119]: E0121 09:55:56.358122 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:56 crc kubenswrapper[5119]: E0121 09:55:56.458652 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:56 crc kubenswrapper[5119]: E0121 09:55:56.559082 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:56 crc kubenswrapper[5119]: I0121 09:55:56.590851 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:55:56 crc kubenswrapper[5119]: I0121 09:55:56.591581 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:55:56 crc kubenswrapper[5119]: I0121 09:55:56.591711 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:55:56 crc kubenswrapper[5119]: I0121 09:55:56.591773 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:55:56 crc kubenswrapper[5119]: E0121 09:55:56.592139 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:55:56 crc kubenswrapper[5119]: E0121 09:55:56.659355 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:56 crc kubenswrapper[5119]: E0121 09:55:56.760305 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:56 crc kubenswrapper[5119]: E0121 09:55:56.861232 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:56 crc kubenswrapper[5119]: E0121 09:55:56.961740 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:57 crc kubenswrapper[5119]: E0121 09:55:57.062201 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:57 crc kubenswrapper[5119]: E0121 09:55:57.163125 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:57 crc kubenswrapper[5119]: E0121 09:55:57.263992 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:57 crc kubenswrapper[5119]: E0121 09:55:57.364657 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:57 crc kubenswrapper[5119]: E0121 09:55:57.465980 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:57 crc kubenswrapper[5119]: E0121 09:55:57.566629 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:57 crc kubenswrapper[5119]: E0121 09:55:57.667653 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:57 crc kubenswrapper[5119]: E0121 09:55:57.768773 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:57 crc kubenswrapper[5119]: I0121 09:55:57.849486 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Jan 21 09:55:57 crc kubenswrapper[5119]: I0121 09:55:57.850050 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Jan 21 09:55:57 crc kubenswrapper[5119]: I0121 09:55:57.851301 5119 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="af966673193b18b94972ab97e230ca79320cda15af6c9ff9d82808e946b8c6a8" exitCode=255
Jan 21 09:55:57 crc kubenswrapper[5119]: I0121 09:55:57.851358 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"af966673193b18b94972ab97e230ca79320cda15af6c9ff9d82808e946b8c6a8"}
Jan 21 09:55:57 crc kubenswrapper[5119]: I0121 09:55:57.851397 5119 scope.go:117] "RemoveContainer" containerID="c811334bae3827591740f0dae0b6af099e6b9574913f927c10c8480d23d8fedd"
Jan 21 09:55:57 crc kubenswrapper[5119]: I0121 09:55:57.851650 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:55:57 crc kubenswrapper[5119]: I0121 09:55:57.852321 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:55:57 crc kubenswrapper[5119]: I0121 09:55:57.852352 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:55:57 crc kubenswrapper[5119]: I0121 09:55:57.852365 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:55:57 crc kubenswrapper[5119]: E0121 09:55:57.852835 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:55:57 crc kubenswrapper[5119]: I0121 09:55:57.853110 5119 scope.go:117] "RemoveContainer" containerID="af966673193b18b94972ab97e230ca79320cda15af6c9ff9d82808e946b8c6a8"
Jan 21 09:55:57 crc kubenswrapper[5119]: E0121 09:55:57.853318 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 21 09:55:57 crc kubenswrapper[5119]: E0121
09:55:57.869328 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:57 crc kubenswrapper[5119]: E0121 09:55:57.969756 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:58 crc kubenswrapper[5119]: E0121 09:55:58.070116 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:58 crc kubenswrapper[5119]: E0121 09:55:58.170930 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:58 crc kubenswrapper[5119]: E0121 09:55:58.271657 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:58 crc kubenswrapper[5119]: E0121 09:55:58.372595 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:58 crc kubenswrapper[5119]: E0121 09:55:58.473716 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:58 crc kubenswrapper[5119]: E0121 09:55:58.574027 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:58 crc kubenswrapper[5119]: E0121 09:55:58.675918 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:58 crc kubenswrapper[5119]: E0121 09:55:58.776801 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:58 crc kubenswrapper[5119]: I0121 09:55:58.854731 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Jan 21 09:55:58 crc kubenswrapper[5119]: E0121 09:55:58.877673 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:58 crc kubenswrapper[5119]: E0121 09:55:58.978125 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:59 crc kubenswrapper[5119]: E0121 09:55:59.078373 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:59 crc kubenswrapper[5119]: E0121 09:55:59.179306 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:59 crc kubenswrapper[5119]: E0121 09:55:59.279898 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:59 crc kubenswrapper[5119]: E0121 09:55:59.380812 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:59 crc kubenswrapper[5119]: E0121 09:55:59.481312 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:59 crc kubenswrapper[5119]: E0121 09:55:59.582098 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:59 crc kubenswrapper[5119]: E0121 09:55:59.682839 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:59 crc kubenswrapper[5119]: E0121 09:55:59.784676 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:59 crc kubenswrapper[5119]: E0121 09:55:59.886124 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:55:59 crc kubenswrapper[5119]: E0121 09:55:59.987171 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:00 crc kubenswrapper[5119]: E0121 09:56:00.088323 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:00 crc kubenswrapper[5119]: E0121 09:56:00.189108 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:00 crc kubenswrapper[5119]: E0121 09:56:00.290093 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:00 crc kubenswrapper[5119]: E0121 09:56:00.390932 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:00 crc kubenswrapper[5119]: E0121 09:56:00.492033 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:00 crc kubenswrapper[5119]: E0121 09:56:00.592855 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:00 crc kubenswrapper[5119]: E0121 09:56:00.694022 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:00 crc kubenswrapper[5119]: E0121 09:56:00.794574 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:00 crc kubenswrapper[5119]: E0121 09:56:00.895264 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:00 crc kubenswrapper[5119]: E0121 09:56:00.995777 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:01 crc kubenswrapper[5119]: E0121 09:56:01.096057 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:01 crc kubenswrapper[5119]: E0121 09:56:01.196951 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:01 crc
kubenswrapper[5119]: E0121 09:56:01.297997 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:01 crc kubenswrapper[5119]: I0121 09:56:01.374943 5119 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 09:56:01 crc kubenswrapper[5119]: I0121 09:56:01.375637 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:56:01 crc kubenswrapper[5119]: I0121 09:56:01.376946 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:56:01 crc kubenswrapper[5119]: I0121 09:56:01.377048 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:56:01 crc kubenswrapper[5119]: I0121 09:56:01.377075 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:56:01 crc kubenswrapper[5119]: E0121 09:56:01.378734 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:56:01 crc kubenswrapper[5119]: I0121 09:56:01.380333 5119 scope.go:117] "RemoveContainer" containerID="af966673193b18b94972ab97e230ca79320cda15af6c9ff9d82808e946b8c6a8"
Jan 21 09:56:01 crc kubenswrapper[5119]: E0121 09:56:01.380924 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 21 09:56:01 crc kubenswrapper[5119]: E0121 09:56:01.398372 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:01 crc kubenswrapper[5119]: E0121 09:56:01.498825 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:01 crc kubenswrapper[5119]: E0121 09:56:01.599322 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:01 crc kubenswrapper[5119]: I0121 09:56:01.668439 5119 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 21 09:56:01 crc kubenswrapper[5119]: E0121 09:56:01.700064 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:01 crc kubenswrapper[5119]: E0121 09:56:01.801198 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:01 crc kubenswrapper[5119]: E0121 09:56:01.901308 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:02 crc kubenswrapper[5119]: E0121 09:56:02.001459 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:02 crc kubenswrapper[5119]: E0121 09:56:02.101905 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:02 crc kubenswrapper[5119]: E0121 09:56:02.202015 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:02 crc kubenswrapper[5119]: E0121 09:56:02.302779 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:02 crc kubenswrapper[5119]: E0121 09:56:02.403636 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:02 crc kubenswrapper[5119]: E0121 09:56:02.503746 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:02 crc kubenswrapper[5119]: E0121 09:56:02.604484 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:02 crc kubenswrapper[5119]: E0121 09:56:02.704904 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:02 crc kubenswrapper[5119]: E0121 09:56:02.805339 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:02 crc kubenswrapper[5119]: E0121 09:56:02.906292 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:03 crc kubenswrapper[5119]: E0121 09:56:03.006518 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:03 crc kubenswrapper[5119]: E0121 09:56:03.107554 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:03 crc kubenswrapper[5119]: E0121 09:56:03.208440 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:03 crc kubenswrapper[5119]: E0121 09:56:03.309377 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:03 crc kubenswrapper[5119]: E0121 09:56:03.410568 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:03 crc kubenswrapper[5119]: E0121 09:56:03.511823 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:03 crc kubenswrapper[5119]: E0121 09:56:03.613073 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:03 crc
kubenswrapper[5119]: E0121 09:56:03.643675 5119 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Jan 21 09:56:03 crc kubenswrapper[5119]: I0121 09:56:03.647682 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:56:03 crc kubenswrapper[5119]: I0121 09:56:03.647720 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:56:03 crc kubenswrapper[5119]: I0121 09:56:03.647732 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:56:03 crc kubenswrapper[5119]: I0121 09:56:03.647745 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:56:03 crc kubenswrapper[5119]: I0121 09:56:03.647756 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:03Z","lastTransitionTime":"2026-01-21T09:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:03 crc kubenswrapper[5119]: E0121 09:56:03.657551 5119 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"22713055-0a0e-45db-9c06-ee5e114186db\\\",\\\"systemUUID\\\":\\\"53b028bb-3049-4b24-a969-520f108bc223\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 09:56:03 crc kubenswrapper[5119]: I0121 09:56:03.661211 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:56:03 crc kubenswrapper[5119]: I0121 09:56:03.661237 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:56:03 crc kubenswrapper[5119]: I0121 09:56:03.661246 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:56:03 crc kubenswrapper[5119]: I0121 09:56:03.661258 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:56:03 crc kubenswrapper[5119]: I0121 09:56:03.661267 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:03Z","lastTransitionTime":"2026-01-21T09:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:03 crc kubenswrapper[5119]: E0121 09:56:03.673773 5119 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"22713055-0a0e-45db-9c06-ee5e114186db\\\",\\\"systemUUID\\\":\\\"53b028bb-3049-4b24-a969-520f108bc223\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:56:03 crc kubenswrapper[5119]: I0121 09:56:03.677636 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:03 crc kubenswrapper[5119]: I0121 09:56:03.677816 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:03 crc kubenswrapper[5119]: I0121 09:56:03.677900 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:03 crc kubenswrapper[5119]: I0121 09:56:03.678004 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:03 crc kubenswrapper[5119]: I0121 09:56:03.678104 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:03Z","lastTransitionTime":"2026-01-21T09:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:03 crc kubenswrapper[5119]: E0121 09:56:03.692307 5119 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"22713055-0a0e-45db-9c06-ee5e114186db\\\",\\\"systemUUID\\\":\\\"53b028bb-3049-4b24-a969-520f108bc223\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:56:03 crc kubenswrapper[5119]: I0121 09:56:03.695080 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:03 crc kubenswrapper[5119]: I0121 09:56:03.695145 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:03 crc kubenswrapper[5119]: I0121 09:56:03.695159 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:03 crc kubenswrapper[5119]: I0121 09:56:03.695197 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:03 crc kubenswrapper[5119]: I0121 09:56:03.695209 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:03Z","lastTransitionTime":"2026-01-21T09:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:03 crc kubenswrapper[5119]: E0121 09:56:03.705860 5119 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"22713055-0a0e-45db-9c06-ee5e114186db\\\",\\\"systemUUID\\\":\\\"53b028bb-3049-4b24-a969-520f108bc223\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 09:56:03 crc kubenswrapper[5119]: E0121 09:56:03.706003 5119 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Jan 21 09:56:03 crc kubenswrapper[5119]: E0121 09:56:03.714042 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:03 crc kubenswrapper[5119]: E0121 09:56:03.815123 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:03 crc kubenswrapper[5119]: E0121 09:56:03.915686 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:04 crc kubenswrapper[5119]: E0121 09:56:04.016816 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:04 crc kubenswrapper[5119]: E0121 09:56:04.117899 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:04 crc kubenswrapper[5119]: E0121 09:56:04.218948 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:04 crc kubenswrapper[5119]: E0121 09:56:04.319075 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:04 crc kubenswrapper[5119]: E0121 09:56:04.419869 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:04 crc kubenswrapper[5119]: E0121 09:56:04.520645 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:04 crc kubenswrapper[5119]: E0121 09:56:04.621562 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:04 crc kubenswrapper[5119]: E0121 09:56:04.663286 5119 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 21 09:56:04 crc kubenswrapper[5119]: E0121 09:56:04.721912 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:04 crc kubenswrapper[5119]: E0121 09:56:04.822591 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:04 crc kubenswrapper[5119]: E0121 09:56:04.923385 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:05 crc kubenswrapper[5119]: E0121 09:56:05.023690 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:05 crc kubenswrapper[5119]: E0121 09:56:05.124236 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:05 crc kubenswrapper[5119]: E0121 09:56:05.224399 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:05 crc kubenswrapper[5119]: E0121 09:56:05.324675 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:05 crc kubenswrapper[5119]: E0121 09:56:05.425539 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:05 crc kubenswrapper[5119]: E0121 09:56:05.526119 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:05 crc kubenswrapper[5119]: I0121 09:56:05.590252 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:56:05 crc kubenswrapper[5119]: I0121 09:56:05.591277 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:56:05 crc kubenswrapper[5119]: I0121 09:56:05.591341 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:56:05 crc kubenswrapper[5119]: I0121 09:56:05.591360 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:56:05 crc kubenswrapper[5119]: E0121 09:56:05.592152 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:56:05 crc kubenswrapper[5119]: E0121 09:56:05.627065 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:05 crc kubenswrapper[5119]: E0121 09:56:05.727644 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:05 crc kubenswrapper[5119]: E0121 09:56:05.828525 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:05 crc kubenswrapper[5119]: I0121 09:56:05.844244 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 09:56:05 crc kubenswrapper[5119]: I0121 09:56:05.844525 5119 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:56:05 crc kubenswrapper[5119]: I0121 09:56:05.845461 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:56:05 crc kubenswrapper[5119]: I0121 09:56:05.845537 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:56:05 crc kubenswrapper[5119]: I0121 09:56:05.845551 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:56:05 crc kubenswrapper[5119]: E0121 09:56:05.846131 5119 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:56:05 crc kubenswrapper[5119]: I0121 09:56:05.846527 5119 scope.go:117] "RemoveContainer" containerID="af966673193b18b94972ab97e230ca79320cda15af6c9ff9d82808e946b8c6a8"
Jan 21 09:56:05 crc kubenswrapper[5119]: E0121 09:56:05.846807 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 21 09:56:05 crc kubenswrapper[5119]: E0121 09:56:05.929414 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:06 crc kubenswrapper[5119]: E0121 09:56:06.030063 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:06 crc kubenswrapper[5119]: E0121 09:56:06.130257 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:06 crc kubenswrapper[5119]: E0121 09:56:06.231129 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:06 crc kubenswrapper[5119]: E0121 09:56:06.331582 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:06 crc kubenswrapper[5119]: E0121 09:56:06.432491 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:06 crc kubenswrapper[5119]: E0121 09:56:06.533071 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:06 crc kubenswrapper[5119]: E0121 09:56:06.633441 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:06 crc kubenswrapper[5119]: E0121 09:56:06.733841 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:06 crc kubenswrapper[5119]: E0121 09:56:06.834838 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:06 crc kubenswrapper[5119]: E0121 09:56:06.935677 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:07 crc kubenswrapper[5119]: E0121 09:56:07.036391 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:07 crc kubenswrapper[5119]: E0121 09:56:07.137419 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:07 crc kubenswrapper[5119]: E0121 09:56:07.237868 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:07 crc kubenswrapper[5119]: E0121 09:56:07.338947 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:07 crc kubenswrapper[5119]: E0121 09:56:07.440044 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:07 crc kubenswrapper[5119]: E0121 09:56:07.541338 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:07 crc kubenswrapper[5119]: E0121 09:56:07.642099 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:07 crc kubenswrapper[5119]: E0121 09:56:07.743222 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:07 crc kubenswrapper[5119]: E0121 09:56:07.844264 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:07 crc kubenswrapper[5119]: E0121 09:56:07.945618 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:08 crc kubenswrapper[5119]: E0121 09:56:08.045729 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:08 crc kubenswrapper[5119]: E0121 09:56:08.146136 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:08 crc kubenswrapper[5119]: E0121 09:56:08.246262 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:08 crc kubenswrapper[5119]: E0121 09:56:08.346578 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:08 crc kubenswrapper[5119]: E0121 09:56:08.447314 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:08 crc kubenswrapper[5119]: E0121 09:56:08.548516 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:08 crc kubenswrapper[5119]: E0121 09:56:08.649026 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:08 crc kubenswrapper[5119]: E0121 09:56:08.749900 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:08 crc kubenswrapper[5119]: E0121 09:56:08.850559 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:08 crc kubenswrapper[5119]: E0121 09:56:08.950957 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:09 crc kubenswrapper[5119]: E0121 09:56:09.051066 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:09 crc kubenswrapper[5119]: E0121 09:56:09.151843 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:09 crc kubenswrapper[5119]: E0121 09:56:09.252027 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:09 crc kubenswrapper[5119]: E0121 09:56:09.353180 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:09 crc kubenswrapper[5119]: E0121 09:56:09.453769 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:09 crc kubenswrapper[5119]: E0121 09:56:09.554911 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:09 crc kubenswrapper[5119]: E0121 09:56:09.655863 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:09 crc kubenswrapper[5119]: E0121 09:56:09.756251 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:09 crc kubenswrapper[5119]: E0121 09:56:09.857208 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:09 crc kubenswrapper[5119]: E0121 09:56:09.957383 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:10 crc kubenswrapper[5119]: E0121 09:56:10.057774 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:10 crc kubenswrapper[5119]: E0121 09:56:10.158941 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:10 crc kubenswrapper[5119]: E0121 09:56:10.259505 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:10 crc kubenswrapper[5119]: E0121 09:56:10.360682 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:10 crc kubenswrapper[5119]: E0121 09:56:10.460926 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:10 crc kubenswrapper[5119]: E0121 09:56:10.561690 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:10 crc kubenswrapper[5119]: E0121 09:56:10.662481 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:10 crc kubenswrapper[5119]: E0121 09:56:10.763156 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:10 crc kubenswrapper[5119]: E0121 09:56:10.863725 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:10 crc kubenswrapper[5119]: E0121 09:56:10.964812 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:11 crc kubenswrapper[5119]: E0121 09:56:11.065389 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:11 crc kubenswrapper[5119]: E0121 09:56:11.166318 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:11 crc kubenswrapper[5119]: E0121 09:56:11.267255 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:11 crc kubenswrapper[5119]: E0121 09:56:11.368020 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:11 crc kubenswrapper[5119]: I0121 09:56:11.424291 5119 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 21 09:56:11 crc kubenswrapper[5119]: E0121 09:56:11.468540 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:11 crc kubenswrapper[5119]: E0121 09:56:11.569202 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:11 crc kubenswrapper[5119]: E0121 09:56:11.670030 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:11 crc kubenswrapper[5119]: E0121 09:56:11.770875 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:11 crc kubenswrapper[5119]: E0121 09:56:11.871897 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:11 crc kubenswrapper[5119]: E0121 09:56:11.973034 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:12 crc kubenswrapper[5119]: E0121 09:56:12.073208 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:12 crc kubenswrapper[5119]: E0121 09:56:12.174131 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:12 crc kubenswrapper[5119]: E0121 09:56:12.274384 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:12 crc kubenswrapper[5119]: E0121 09:56:12.375405 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:12 crc kubenswrapper[5119]: E0121 09:56:12.476309 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:12 crc kubenswrapper[5119]: E0121 09:56:12.577166 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:12 crc kubenswrapper[5119]: E0121 09:56:12.677436 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:12 crc kubenswrapper[5119]: E0121 09:56:12.777834 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:12 crc kubenswrapper[5119]: E0121 09:56:12.878533 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:12 crc kubenswrapper[5119]: E0121 09:56:12.979567 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:13 crc kubenswrapper[5119]: E0121 09:56:13.079738 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:13 crc kubenswrapper[5119]: E0121 09:56:13.180090 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:13 crc kubenswrapper[5119]: E0121 09:56:13.281170 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:13 crc kubenswrapper[5119]: E0121 09:56:13.382188 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:13 crc kubenswrapper[5119]: E0121 09:56:13.483050 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:13 crc kubenswrapper[5119]: E0121 09:56:13.583814 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:13 crc kubenswrapper[5119]: E0121 09:56:13.684559 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:13 crc kubenswrapper[5119]: E0121 09:56:13.785597 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:56:13 crc kubenswrapper[5119]: E0121 09:56:13.818592 5119 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Jan 21 09:56:13 crc kubenswrapper[5119]: I0121 09:56:13.824113 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:56:13 crc kubenswrapper[5119]: I0121 09:56:13.824175 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:56:13 crc kubenswrapper[5119]: I0121 09:56:13.824196 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:56:13 crc kubenswrapper[5119]: I0121 09:56:13.824227 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:56:13 crc kubenswrapper[5119]: I0121 09:56:13.824251 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:13Z","lastTransitionTime":"2026-01-21T09:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 21 09:56:13 crc kubenswrapper[5119]: E0121 09:56:13.836758 5119 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:56:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:56:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:56:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:56:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"22713055-0a0e-45db-9c06-ee5e114186db\\\",\\\"systemUUID\\\":\\\"53b028bb-3049-4b24-a969-520f108bc223\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:56:13 crc kubenswrapper[5119]: I0121 09:56:13.842768 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:13 crc kubenswrapper[5119]: I0121 09:56:13.842839 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:13 crc kubenswrapper[5119]: I0121 09:56:13.842866 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:13 crc kubenswrapper[5119]: I0121 09:56:13.842898 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:13 crc kubenswrapper[5119]: I0121 09:56:13.842924 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:13Z","lastTransitionTime":"2026-01-21T09:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:13 crc kubenswrapper[5119]: E0121 09:56:13.856142 5119 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to previous attempt, elided] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:56:13 crc kubenswrapper[5119]: I0121 09:56:13.860689 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:13 crc kubenswrapper[5119]: I0121 09:56:13.860743 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:13 crc kubenswrapper[5119]: I0121 09:56:13.860754 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:13 crc kubenswrapper[5119]: I0121 09:56:13.860773 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:13 crc kubenswrapper[5119]: I0121 09:56:13.860786 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:13Z","lastTransitionTime":"2026-01-21T09:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:13 crc kubenswrapper[5119]: E0121 09:56:13.879010 5119 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to previous attempt, elided] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:56:13 crc kubenswrapper[5119]: I0121 09:56:13.883691 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:13 crc kubenswrapper[5119]: I0121 09:56:13.883771 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:13 crc kubenswrapper[5119]: I0121 09:56:13.883787 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:13 crc kubenswrapper[5119]: I0121 09:56:13.883808 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:13 crc kubenswrapper[5119]: I0121 09:56:13.883824 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:13Z","lastTransitionTime":"2026-01-21T09:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:13 crc kubenswrapper[5119]: E0121 09:56:13.900874 5119 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400464Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861264Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:56:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:56:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:56:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:56:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"22713055-0a0e-45db-9c06-ee5e114186db\\\",\\\"systemUUID\\\":\\\"53b028bb-3049-4b24-a969-520f108bc223\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:56:13 crc kubenswrapper[5119]: E0121 09:56:13.901036 5119 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 21 09:56:13 crc kubenswrapper[5119]: E0121 09:56:13.901076 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:14 crc kubenswrapper[5119]: E0121 09:56:14.002192 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:14 crc kubenswrapper[5119]: E0121 09:56:14.103391 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:14 crc kubenswrapper[5119]: E0121 09:56:14.204542 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:14 crc kubenswrapper[5119]: E0121 09:56:14.304933 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:14 crc kubenswrapper[5119]: E0121 09:56:14.405565 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:14 crc kubenswrapper[5119]: E0121 09:56:14.506050 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:14 crc kubenswrapper[5119]: E0121 09:56:14.606738 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:14 crc kubenswrapper[5119]: E0121 09:56:14.664566 5119 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 09:56:14 crc kubenswrapper[5119]: 
E0121 09:56:14.707235 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:14 crc kubenswrapper[5119]: E0121 09:56:14.807478 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:14 crc kubenswrapper[5119]: E0121 09:56:14.909439 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:15 crc kubenswrapper[5119]: E0121 09:56:15.010349 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:15 crc kubenswrapper[5119]: E0121 09:56:15.111027 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:15 crc kubenswrapper[5119]: E0121 09:56:15.212244 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:15 crc kubenswrapper[5119]: E0121 09:56:15.313468 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:15 crc kubenswrapper[5119]: E0121 09:56:15.414262 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:15 crc kubenswrapper[5119]: E0121 09:56:15.514919 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:15 crc kubenswrapper[5119]: E0121 09:56:15.615938 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:15 crc kubenswrapper[5119]: E0121 09:56:15.716222 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:15 crc kubenswrapper[5119]: E0121 09:56:15.816438 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" 
Jan 21 09:56:15 crc kubenswrapper[5119]: E0121 09:56:15.916727 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:16 crc kubenswrapper[5119]: E0121 09:56:16.017489 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:16 crc kubenswrapper[5119]: E0121 09:56:16.118059 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:16 crc kubenswrapper[5119]: E0121 09:56:16.218181 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:16 crc kubenswrapper[5119]: E0121 09:56:16.319165 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:16 crc kubenswrapper[5119]: E0121 09:56:16.419574 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:16 crc kubenswrapper[5119]: E0121 09:56:16.520526 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:16 crc kubenswrapper[5119]: E0121 09:56:16.621571 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:16 crc kubenswrapper[5119]: E0121 09:56:16.722817 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:16 crc kubenswrapper[5119]: E0121 09:56:16.823755 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:16 crc kubenswrapper[5119]: E0121 09:56:16.925183 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:17 crc kubenswrapper[5119]: E0121 09:56:17.025587 5119 kubelet_node_status.go:515] "Error getting the current node 
from lister" err="node \"crc\" not found" Jan 21 09:56:17 crc kubenswrapper[5119]: E0121 09:56:17.125829 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:17 crc kubenswrapper[5119]: E0121 09:56:17.226467 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:17 crc kubenswrapper[5119]: E0121 09:56:17.327653 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:17 crc kubenswrapper[5119]: E0121 09:56:17.428142 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:17 crc kubenswrapper[5119]: E0121 09:56:17.529211 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:17 crc kubenswrapper[5119]: E0121 09:56:17.629600 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:17 crc kubenswrapper[5119]: E0121 09:56:17.730457 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:17 crc kubenswrapper[5119]: E0121 09:56:17.830745 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:17 crc kubenswrapper[5119]: E0121 09:56:17.931414 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:18 crc kubenswrapper[5119]: E0121 09:56:18.031816 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:18 crc kubenswrapper[5119]: E0121 09:56:18.132817 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:18 crc kubenswrapper[5119]: E0121 09:56:18.233580 5119 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:18 crc kubenswrapper[5119]: E0121 09:56:18.334696 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:18 crc kubenswrapper[5119]: E0121 09:56:18.435813 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:18 crc kubenswrapper[5119]: E0121 09:56:18.535929 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:18 crc kubenswrapper[5119]: E0121 09:56:18.636069 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:18 crc kubenswrapper[5119]: E0121 09:56:18.736980 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:18 crc kubenswrapper[5119]: E0121 09:56:18.837549 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:18 crc kubenswrapper[5119]: E0121 09:56:18.939207 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:19 crc kubenswrapper[5119]: E0121 09:56:19.039750 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:19 crc kubenswrapper[5119]: E0121 09:56:19.140636 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:19 crc kubenswrapper[5119]: E0121 09:56:19.241741 5119 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.275484 5119 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 21 09:56:19 crc 
kubenswrapper[5119]: I0121 09:56:19.337467 5119 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.344255 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.344461 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.344531 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.344636 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.344716 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:19Z","lastTransitionTime":"2026-01-21T09:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.347181 5119 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.446324 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.446359 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.446369 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.446382 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.446391 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:19Z","lastTransitionTime":"2026-01-21T09:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.449011 5119 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.548541 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.548579 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.548589 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.548617 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.548627 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:19Z","lastTransitionTime":"2026-01-21T09:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.548648 5119 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.570211 5119 apiserver.go:52] "Watching apiserver" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.575541 5119 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.576215 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-operator/iptables-alerter-5jnd7","openshift-dns/node-resolver-xj6nb","openshift-image-registry/node-ca-42nn8","openshift-ovn-kubernetes/ovnkube-node-lnxvl","openshift-etcd/etcd-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-multus/network-metrics-daemon-fk2f6","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv","openshift-machine-config-operator/machine-config-daemon-5vwrk","openshift-multus/multus-7d4r9","openshift-multus/multus-additional-cni-plugins-lpnb6","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-node-identity/network-node-identity-dgvkt"] Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.577904 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.579515 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.579513 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.579717 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.579771 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 21 09:56:19 crc kubenswrapper[5119]: E0121 09:56:19.579765 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.581263 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 09:56:19 crc kubenswrapper[5119]: E0121 09:56:19.581330 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.582069 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.583215 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.583221 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.583286 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.583500 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:56:19 crc kubenswrapper[5119]: E0121 09:56:19.583562 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.583829 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.584150 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.584559 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.585206 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.591144 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fk2f6" Jan 21 09:56:19 crc kubenswrapper[5119]: E0121 09:56:19.591254 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fk2f6" podUID="0e481d9e-6dd0-4c5e-bb9a-33546cb7715d" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.591297 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.592498 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.592580 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.592661 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.593153 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.593173 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.593455 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.595702 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.596533 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.598862 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.598981 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.599264 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.599647 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.599796 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.601394 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-7d4r9" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.602794 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.602886 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.603696 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.603994 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.604047 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.605472 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lpnb6" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.608297 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.608814 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.609244 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.613799 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.614264 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.615551 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.616316 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.616702 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.616783 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-42nn8" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.619378 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.619445 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.619446 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.619653 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.620205 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-xj6nb" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.621586 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.622052 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.622032 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.623017 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fk2f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e481d9e-6dd0-4c5e-bb9a-33546cb7715d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9d9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9d9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:56:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fk2f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.634227 5119 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.638385 5119 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.638725 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.638881 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.639004 5119 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.639212 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.639309 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.639442 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.639541 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.639748 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: 
\"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.639563 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.640019 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.640102 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.640260 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.640268 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.640352 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.641332 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.640504 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.641559 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.641779 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.642641 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.642905 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.643122 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.643725 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.642132 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.642502 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.642799 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.643353 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.643535 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.643857 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.643968 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.643992 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 
21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644009 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644026 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644044 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644060 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644081 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644096 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644112 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644132 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644149 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644165 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644182 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 21 09:56:19 
crc kubenswrapper[5119]: I0121 09:56:19.644201 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644217 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644233 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644248 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644264 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644280 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod 
\"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644296 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644311 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644330 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644350 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644481 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644494 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644545 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644572 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644620 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644646 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644672 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644700 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644730 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644757 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644830 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644858 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: 
\"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644878 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644900 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644922 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644948 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644969 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.644994 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.645017 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.645041 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.645055 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.645066 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.645110 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.645160 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.645195 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.645218 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.645340 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.645373 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.645402 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.645432 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.645456 5119 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.645479 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.645503 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.645559 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.645583 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.645626 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: 
\"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.645652 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.645675 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.645697 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.645708 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.645719 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.645745 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.645766 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.645875 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.645907 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 09:56:19 crc 
kubenswrapper[5119]: I0121 09:56:19.645934 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.646214 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.646246 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.646508 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.646572 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.646639 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.646647 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.646914 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.647092 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.647099 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.647184 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.646685 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.647393 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.647422 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.647544 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.647565 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.647567 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.647581 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.647743 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.647796 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.647836 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.647875 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.647913 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.647958 5119 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.647972 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.647995 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.648007 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.648037 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.648071 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.648079 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.648134 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.648172 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.648206 5119 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.648213 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.648241 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.648379 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.648284 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.648422 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.648438 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.648470 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.648508 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.648640 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 21 09:56:19 crc 
kubenswrapper[5119]: I0121 09:56:19.648686 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.648736 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.648777 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.648825 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.648876 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.648924 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.648973 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.649185 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.649257 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.649307 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.649358 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 21 09:56:19 
crc kubenswrapper[5119]: I0121 09:56:19.649415 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.648509 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.649410 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.649560 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.649873 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.649966 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.650011 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.650247 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.650402 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.650875 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.650955 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.652378 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.652541 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.652579 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.652658 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.652666 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.652721 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.652744 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.652762 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.652767 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.652782 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.652799 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.652816 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.652832 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.652854 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.652874 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: 
\"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.652893 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.652894 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.652912 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.652965 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.653183 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.653198 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.653256 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.653302 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.653341 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.653820 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.653847 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.653867 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 
09:56:19.653885 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.653906 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.653930 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.653948 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.653970 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.653992 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.653992 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654013 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654037 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654058 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654076 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654093 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654212 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654217 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654236 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654410 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654449 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654473 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654492 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654509 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654527 5119 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654551 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654577 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654678 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654711 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654759 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: 
\"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654777 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654796 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654815 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654834 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654851 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654871 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654888 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654908 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654927 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654945 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654962 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 21 09:56:19 crc 
kubenswrapper[5119]: I0121 09:56:19.654978 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.654998 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.655014 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.655032 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.655052 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.655070 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.655088 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.655108 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.655129 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.655150 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.655168 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 09:56:19 crc 
kubenswrapper[5119]: I0121 09:56:19.655187 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.655206 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.655225 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.655243 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.655260 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.655277 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.655299 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.655407 5119 scope.go:117] "RemoveContainer" containerID="af966673193b18b94972ab97e230ca79320cda15af6c9ff9d82808e946b8c6a8" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.655301 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.655759 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.655768 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.655783 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.655792 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:19Z","lastTransitionTime":"2026-01-21T09:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:19 crc kubenswrapper[5119]: E0121 09:56:19.655843 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.656061 5119 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.656725 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.656756 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.656777 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.656798 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.656816 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.656833 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.656852 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.656871 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.656890 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.656909 5119 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.656928 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.656946 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.656972 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.656989 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657007 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod 
\"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657025 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657045 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657064 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657082 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657100 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657123 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657144 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657160 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657180 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657199 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657215 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 09:56:19 crc 
kubenswrapper[5119]: I0121 09:56:19.657236 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657255 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657274 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657292 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657311 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657329 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod 
\"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657374 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657402 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657425 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657445 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657467 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657486 5119 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657508 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657526 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657557 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657657 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/20b7f175-32b1-486b-b6c0-8c12a6ad8338-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lpnb6\" (UID: \"20b7f175-32b1-486b-b6c0-8c12a6ad8338\") " pod="openshift-multus/multus-additional-cni-plugins-lpnb6" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657681 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657698 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-system-cni-dir\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657715 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-os-release\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657732 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-node-log\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657760 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657779 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657797 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0eddaf40-46ea-4d13-b78e-a1f4c439795d-host\") pod \"node-ca-42nn8\" (UID: \"0eddaf40-46ea-4d13-b78e-a1f4c439795d\") " pod="openshift-image-registry/node-ca-42nn8"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657814 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/20b7f175-32b1-486b-b6c0-8c12a6ad8338-os-release\") pod \"multus-additional-cni-plugins-lpnb6\" (UID: \"20b7f175-32b1-486b-b6c0-8c12a6ad8338\") " pod="openshift-multus/multus-additional-cni-plugins-lpnb6"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657831 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/20b7f175-32b1-486b-b6c0-8c12a6ad8338-cni-binary-copy\") pod \"multus-additional-cni-plugins-lpnb6\" (UID: \"20b7f175-32b1-486b-b6c0-8c12a6ad8338\") " pod="openshift-multus/multus-additional-cni-plugins-lpnb6"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657849 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-host-run-multus-certs\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657879 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657898 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-host-var-lib-kubelet\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657917 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/766a5e24-f953-49f2-b732-1a783ea97e3f-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-wkwlv\" (UID: \"766a5e24-f953-49f2-b732-1a783ea97e3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657939 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.657957 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/20b7f175-32b1-486b-b6c0-8c12a6ad8338-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-lpnb6\" (UID: \"20b7f175-32b1-486b-b6c0-8c12a6ad8338\") " pod="openshift-multus/multus-additional-cni-plugins-lpnb6"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.658145 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-cnibin\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.658164 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c3c35acb-afad-4124-a4e6-bf36f963ecbf-cni-binary-copy\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.658184 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-host-run-k8s-cni-cncf-io\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.658204 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-hostroot\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.658392 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.658813 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.659036 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.659357 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5-mcd-auth-proxy-config\") pod \"machine-config-daemon-5vwrk\" (UID: \"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5\") " pod="openshift-machine-config-operator/machine-config-daemon-5vwrk"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.659403 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmsct\" (UniqueName: \"kubernetes.io/projected/0eddaf40-46ea-4d13-b78e-a1f4c439795d-kube-api-access-wmsct\") pod \"node-ca-42nn8\" (UID: \"0eddaf40-46ea-4d13-b78e-a1f4c439795d\") " pod="openshift-image-registry/node-ca-42nn8"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.659792 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.659937 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxw7n\" (UniqueName: \"kubernetes.io/projected/20b7f175-32b1-486b-b6c0-8c12a6ad8338-kube-api-access-cxw7n\") pod \"multus-additional-cni-plugins-lpnb6\" (UID: \"20b7f175-32b1-486b-b6c0-8c12a6ad8338\") " pod="openshift-multus/multus-additional-cni-plugins-lpnb6"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.659978 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-host-var-lib-cni-bin\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.660005 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-multus-conf-dir\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.660031 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/766a5e24-f953-49f2-b732-1a783ea97e3f-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-wkwlv\" (UID: \"766a5e24-f953-49f2-b732-1a783ea97e3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.660055 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5-rootfs\") pod \"machine-config-daemon-5vwrk\" (UID: \"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5\") " pod="openshift-machine-config-operator/machine-config-daemon-5vwrk"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.660076 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-systemd-units\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.660102 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-log-socket\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.660129 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-cni-netd\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.660154 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-env-overrides\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.660173 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-ovnkube-script-lib\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.660199 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/766a5e24-f953-49f2-b732-1a783ea97e3f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-wkwlv\" (UID: \"766a5e24-f953-49f2-b732-1a783ea97e3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.660220 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/20b7f175-32b1-486b-b6c0-8c12a6ad8338-cnibin\") pod \"multus-additional-cni-plugins-lpnb6\" (UID: \"20b7f175-32b1-486b-b6c0-8c12a6ad8338\") " pod="openshift-multus/multus-additional-cni-plugins-lpnb6"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.660242 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-run-ovn\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.660260 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-run-ovn-kubernetes\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.660276 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-ovn-node-metrics-cert\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.660295 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vw82\" (UniqueName: \"kubernetes.io/projected/387b63b2-bff9-43f3-9a3d-0b81aec7f5a7-kube-api-access-9vw82\") pod \"node-resolver-xj6nb\" (UID: \"387b63b2-bff9-43f3-9a3d-0b81aec7f5a7\") " pod="openshift-dns/node-resolver-xj6nb"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.660324 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0eddaf40-46ea-4d13-b78e-a1f4c439795d-serviceca\") pod \"node-ca-42nn8\" (UID: \"0eddaf40-46ea-4d13-b78e-a1f4c439795d\") " pod="openshift-image-registry/node-ca-42nn8"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.660347 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/20b7f175-32b1-486b-b6c0-8c12a6ad8338-system-cni-dir\") pod \"multus-additional-cni-plugins-lpnb6\" (UID: \"20b7f175-32b1-486b-b6c0-8c12a6ad8338\") " pod="openshift-multus/multus-additional-cni-plugins-lpnb6"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.660375 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/20b7f175-32b1-486b-b6c0-8c12a6ad8338-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lpnb6\" (UID: \"20b7f175-32b1-486b-b6c0-8c12a6ad8338\") " pod="openshift-multus/multus-additional-cni-plugins-lpnb6"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.660402 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-multus-cni-dir\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.660425 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-multus-socket-dir-parent\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.660449 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c3c35acb-afad-4124-a4e6-bf36f963ecbf-multus-daemon-config\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.660473 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-slash\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.660497 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-run-netns\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.660521 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-etc-openvswitch\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.660542 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-run-openvswitch\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.660559 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-cni-bin\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.660575 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-host-run-netns\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.660684 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-etc-kubernetes\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.660709 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.660732 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5-proxy-tls\") pod \"machine-config-daemon-5vwrk\" (UID: \"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5\") " pod="openshift-machine-config-operator/machine-config-daemon-5vwrk"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.662321 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.662467 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-run-systemd\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.662525 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhxd4\" (UniqueName: \"kubernetes.io/projected/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-kube-api-access-qhxd4\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.662661 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.662738 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.662785 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.662835 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9d9w\" (UniqueName: \"kubernetes.io/projected/0e481d9e-6dd0-4c5e-bb9a-33546cb7715d-kube-api-access-z9d9w\") pod \"network-metrics-daemon-fk2f6\" (UID: \"0e481d9e-6dd0-4c5e-bb9a-33546cb7715d\") " pod="openshift-multus/network-metrics-daemon-fk2f6"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.662888 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrqdj\" (UniqueName: \"kubernetes.io/projected/f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5-kube-api-access-xrqdj\") pod \"machine-config-daemon-5vwrk\" (UID: \"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5\") " pod="openshift-machine-config-operator/machine-config-daemon-5vwrk"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.662932 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.665968 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"766a5e24-f953-49f2-b732-1a783ea97e3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:56:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-wkwlv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.662357 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.666968 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.662515 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.662541 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.662833 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:56:19 crc kubenswrapper[5119]: E0121 09:56:19.662970 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:20.162928472 +0000 UTC m=+95.831020190 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.663514 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.663595 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.663557 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.663949 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.663996 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.664121 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.664417 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.664565 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.664687 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.664745 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.664992 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.665023 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.665137 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.665195 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.665369 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.665469 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.665595 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.665508 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.665666 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.665796 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.665816 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.665835 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.665850 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.665854 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.666330 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: E0121 09:56:19.666562 5119 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.669895 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: E0121 09:56:19.670128 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. 
No retries permitted until 2026-01-21 09:56:20.170105744 +0000 UTC m=+95.838197422 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.674263 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.674103 5119 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.674343 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.674425 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.674533 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.674563 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.674643 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.674883 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.673486 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.667434 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.666269 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.667768 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.667934 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.668029 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.668185 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.675110 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.668255 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.668400 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.668463 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.668525 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.668329 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.668818 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.668872 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.669091 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.669203 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.666635 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.669221 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.667280 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.666938 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.670214 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.670975 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.671017 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.671054 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.671071 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.671269 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.671496 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.671524 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.671947 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.672025 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.672119 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.662408 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: E0121 09:56:19.672367 5119 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.672658 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.673383 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.673532 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.673531 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.674036 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.674165 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.675380 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.675416 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.675441 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.675468 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e481d9e-6dd0-4c5e-bb9a-33546cb7715d-metrics-certs\") pod \"network-metrics-daemon-fk2f6\" (UID: \"0e481d9e-6dd0-4c5e-bb9a-33546cb7715d\") " pod="openshift-multus/network-metrics-daemon-fk2f6" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.675494 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-kubelet\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.675514 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlg2s\" (UniqueName: \"kubernetes.io/projected/c3c35acb-afad-4124-a4e6-bf36f963ecbf-kube-api-access-xlg2s\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.675507 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.675536 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/387b63b2-bff9-43f3-9a3d-0b81aec7f5a7-hosts-file\") pod \"node-resolver-xj6nb\" (UID: \"387b63b2-bff9-43f3-9a3d-0b81aec7f5a7\") " pod="openshift-dns/node-resolver-xj6nb" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.675558 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btx85\" (UniqueName: \"kubernetes.io/projected/766a5e24-f953-49f2-b732-1a783ea97e3f-kube-api-access-btx85\") pod \"ovnkube-control-plane-57b78d8988-wkwlv\" (UID: \"766a5e24-f953-49f2-b732-1a783ea97e3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.675580 5119 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.675598 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/387b63b2-bff9-43f3-9a3d-0b81aec7f5a7-tmp-dir\") pod \"node-resolver-xj6nb\" (UID: \"387b63b2-bff9-43f3-9a3d-0b81aec7f5a7\") " pod="openshift-dns/node-resolver-xj6nb" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.675645 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-host-var-lib-cni-multus\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.675667 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-var-lib-openvswitch\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.675684 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-ovnkube-config\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.675821 5119 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.675835 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.675846 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.675855 5119 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.675869 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.675879 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.675890 5119 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.675901 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: 
\"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.675911 5119 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.675923 5119 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.675933 5119 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.675944 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.675953 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.675955 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.675963 5119 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.676008 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.676026 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.676041 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.676052 5119 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.676065 5119 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.676076 5119 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc 
kubenswrapper[5119]: I0121 09:56:19.676087 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.676100 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.676112 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.676125 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.676136 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.676146 5119 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.676157 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 
09:56:19.676243 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.676690 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.676793 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.677064 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.677353 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.677396 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.678104 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679007 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679034 5119 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679079 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679210 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679244 5119 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679269 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679294 5119 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679372 5119 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679410 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679416 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679433 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679474 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679501 5119 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679526 5119 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679543 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679554 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679561 5119 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679590 5119 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679625 5119 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679645 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679660 5119 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679675 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679694 5119 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc 
kubenswrapper[5119]: I0121 09:56:19.679707 5119 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679722 5119 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679735 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679749 5119 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679762 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679776 5119 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679838 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679854 5119 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: E0121 09:56:19.679881 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 09:56:20.179859645 +0000 UTC m=+95.847951323 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679894 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679909 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679923 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679936 5119 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679948 5119 
reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679959 5119 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679971 5119 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679983 5119 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.679996 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680011 5119 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680024 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680036 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: 
\"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680049 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680063 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680075 5119 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680087 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680100 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680111 5119 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680123 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 
crc kubenswrapper[5119]: I0121 09:56:19.680134 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680145 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680156 5119 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680166 5119 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680175 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680205 5119 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680214 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680223 5119 
reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680235 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680244 5119 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680254 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680263 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680272 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680280 5119 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680290 5119 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680298 5119 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680308 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680317 5119 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680327 5119 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680360 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680372 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680381 5119 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680391 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680421 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680435 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680443 5119 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680453 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680482 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680493 5119 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680503 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680512 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680540 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.680550 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.684089 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.685129 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.685332 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.686315 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.686456 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: E0121 09:56:19.688259 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 09:56:19 crc kubenswrapper[5119]: E0121 09:56:19.688278 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 09:56:19 crc kubenswrapper[5119]: E0121 09:56:19.688289 5119 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 09:56:19 crc kubenswrapper[5119]: E0121 09:56:19.688342 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-21 09:56:20.188330601 +0000 UTC m=+95.856422279 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.690851 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.691200 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.692166 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.693057 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.693789 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.694384 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.694644 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.695206 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.696096 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.696116 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.695382 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.696881 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: E0121 09:56:19.697306 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 09:56:19 crc kubenswrapper[5119]: E0121 09:56:19.697335 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 09:56:19 crc kubenswrapper[5119]: E0121 09:56:19.697347 5119 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 09:56:19 crc kubenswrapper[5119]: E0121 09:56:19.697436 5119 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-21 09:56:20.197418174 +0000 UTC m=+95.865509852 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.698279 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.698443 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.698953 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.699038 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.699069 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.699469 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.699551 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.699853 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.701051 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.701542 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.701766 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.701791 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.702129 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.702834 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.702868 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.702991 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.703188 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.703358 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.703777 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.703984 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.704026 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.704159 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.704220 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.704439 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.704468 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.704568 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.704997 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.705009 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.705494 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.705489 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.705225 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.705383 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.705384 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.705435 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.705659 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.705708 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.705928 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.706677 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.706683 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.707110 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.707275 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.707736 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.707824 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.708030 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.708742 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.709031 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.709225 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.709415 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.709714 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.709732 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.709801 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.709822 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.709890 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.709906 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.710293 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.710534 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.710651 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.712726 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.719555 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.720549 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.721361 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.721403 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhxd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn
-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhxd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhxd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\
\",\\\"name\\\":\\\"kube-api-access-qhxd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhxd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhxd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\
\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhxd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhxd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"n
ame\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhxd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:56:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lnxvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.732848 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6da918-cee3-4bfa-a950-66194867f664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:55:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:55:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02a66e8fce864e03922933d50d2f46c9723439d27f80c37cec85769c68188108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8
feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:54:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://470140ab3976c458184542c1dabf115e0e58166e29b3dea093441b4b8376e691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:54:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"alloc
atedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5cd6be5cbdd0fcbb91b735fb086052ec8f7c116137e8ffc5a7810ad3c41fd0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:54:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0d256e3818040b35be7b6695f5d3db72105283175c97210ab4812e3bc8b0c97f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o
://0d256e3818040b35be7b6695f5d3db72105283175c97210ab4812e3bc8b0c97f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T09:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T09:54:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:54:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.732933 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.735929 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.742643 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.744817 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.748453 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.750958 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.757781 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fk2f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e481d9e-6dd0-4c5e-bb9a-33546cb7715d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9d9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9d9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:56:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fk2f6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.759069 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.759120 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.759132 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.759146 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.759158 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:19Z","lastTransitionTime":"2026-01-21T09:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.773296 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b64d9f8a-ffbf-4c0c-b861-15970d921c8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://14b47c160ec5dffdee4357bbd382f1496b235b085c4fe443b01e73e9f877570c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:54:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://dbe65977c6e0db625c023229341e3b84c9173e51006bf22d745ad47a1432d421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:54:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1e8064cabfb679b3604b92ee2d391248a40f938a1343e1d97413783f02c35748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a
6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:54:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://390b2ff8810cc1420f748190d4eaa2e29232197cc53ac8be36ee13312ac3dda2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:54:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://2e9cad5d6f6cab73e81cdd223b3f6781b26a930f25412d08728fa06e8858d06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:54:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ede3e82ab49565f7370837e3f7d418accce5fa866740ed81a80c838eb4246732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ede3e82ab49565f7370837e3f7d418accce5fa866740ed81a80c838eb4246732\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2026-01-21T09:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T09:54:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://199845e27bc250344d296cb991fce2f8c77a2e2761012343e19c574acc1daa8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://199845e27bc250344d296cb991fce2f8c77a2e2761012343e19c574acc1daa8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T09:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T09:54:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://2ddea5f3c72381bc4a6dbbbf368733bfd085a6d72265fe838150007a05987932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\
",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddea5f3c72381bc4a6dbbbf368733bfd085a6d72265fe838150007a05987932\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T09:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T09:54:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:54:44Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781243 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/766a5e24-f953-49f2-b732-1a783ea97e3f-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-wkwlv\" (UID: \"766a5e24-f953-49f2-b732-1a783ea97e3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781291 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5-rootfs\") pod 
\"machine-config-daemon-5vwrk\" (UID: \"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5\") " pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781316 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-systemd-units\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781337 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-log-socket\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781356 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-cni-netd\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781380 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-env-overrides\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781398 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-ovnkube-script-lib\") pod \"ovnkube-node-lnxvl\" (UID: 
\"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781422 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/766a5e24-f953-49f2-b732-1a783ea97e3f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-wkwlv\" (UID: \"766a5e24-f953-49f2-b732-1a783ea97e3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781445 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/20b7f175-32b1-486b-b6c0-8c12a6ad8338-cnibin\") pod \"multus-additional-cni-plugins-lpnb6\" (UID: \"20b7f175-32b1-486b-b6c0-8c12a6ad8338\") " pod="openshift-multus/multus-additional-cni-plugins-lpnb6" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781466 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-run-ovn\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781490 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-run-ovn-kubernetes\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781512 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-ovn-node-metrics-cert\") pod 
\"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781534 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9vw82\" (UniqueName: \"kubernetes.io/projected/387b63b2-bff9-43f3-9a3d-0b81aec7f5a7-kube-api-access-9vw82\") pod \"node-resolver-xj6nb\" (UID: \"387b63b2-bff9-43f3-9a3d-0b81aec7f5a7\") " pod="openshift-dns/node-resolver-xj6nb" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781556 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0eddaf40-46ea-4d13-b78e-a1f4c439795d-serviceca\") pod \"node-ca-42nn8\" (UID: \"0eddaf40-46ea-4d13-b78e-a1f4c439795d\") " pod="openshift-image-registry/node-ca-42nn8" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781578 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/20b7f175-32b1-486b-b6c0-8c12a6ad8338-system-cni-dir\") pod \"multus-additional-cni-plugins-lpnb6\" (UID: \"20b7f175-32b1-486b-b6c0-8c12a6ad8338\") " pod="openshift-multus/multus-additional-cni-plugins-lpnb6" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781617 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/20b7f175-32b1-486b-b6c0-8c12a6ad8338-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lpnb6\" (UID: \"20b7f175-32b1-486b-b6c0-8c12a6ad8338\") " pod="openshift-multus/multus-additional-cni-plugins-lpnb6" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781642 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-multus-cni-dir\") pod \"multus-7d4r9\" (UID: 
\"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781662 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-multus-socket-dir-parent\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781686 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c3c35acb-afad-4124-a4e6-bf36f963ecbf-multus-daemon-config\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781709 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-slash\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781729 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-run-netns\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781750 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-etc-openvswitch\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 
09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781769 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-run-openvswitch\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781790 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-cni-bin\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781829 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-host-run-netns\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781850 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-etc-kubernetes\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781873 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781895 5119 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5-proxy-tls\") pod \"machine-config-daemon-5vwrk\" (UID: \"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5\") " pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781920 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-run-systemd\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781942 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qhxd4\" (UniqueName: \"kubernetes.io/projected/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-kube-api-access-qhxd4\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.781981 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z9d9w\" (UniqueName: \"kubernetes.io/projected/0e481d9e-6dd0-4c5e-bb9a-33546cb7715d-kube-api-access-z9d9w\") pod \"network-metrics-daemon-fk2f6\" (UID: \"0e481d9e-6dd0-4c5e-bb9a-33546cb7715d\") " pod="openshift-multus/network-metrics-daemon-fk2f6" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.782005 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xrqdj\" (UniqueName: \"kubernetes.io/projected/f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5-kube-api-access-xrqdj\") pod \"machine-config-daemon-5vwrk\" (UID: \"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5\") " pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 
09:56:19.782027 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.782066 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e481d9e-6dd0-4c5e-bb9a-33546cb7715d-metrics-certs\") pod \"network-metrics-daemon-fk2f6\" (UID: \"0e481d9e-6dd0-4c5e-bb9a-33546cb7715d\") " pod="openshift-multus/network-metrics-daemon-fk2f6" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.782088 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-kubelet\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.782112 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xlg2s\" (UniqueName: \"kubernetes.io/projected/c3c35acb-afad-4124-a4e6-bf36f963ecbf-kube-api-access-xlg2s\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.782134 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/387b63b2-bff9-43f3-9a3d-0b81aec7f5a7-hosts-file\") pod \"node-resolver-xj6nb\" (UID: \"387b63b2-bff9-43f3-9a3d-0b81aec7f5a7\") " pod="openshift-dns/node-resolver-xj6nb" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.782135 5119 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/766a5e24-f953-49f2-b732-1a783ea97e3f-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-wkwlv\" (UID: \"766a5e24-f953-49f2-b732-1a783ea97e3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.782158 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-btx85\" (UniqueName: \"kubernetes.io/projected/766a5e24-f953-49f2-b732-1a783ea97e3f-kube-api-access-btx85\") pod \"ovnkube-control-plane-57b78d8988-wkwlv\" (UID: \"766a5e24-f953-49f2-b732-1a783ea97e3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.782110 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4d6aa2e-0817-4a4a-9903-dad63ddfc382\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fa9e30fb91da794586d30c109e92837daa89933bdf20a
8c7cb5319ebec439888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:54:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b3d1aa5783e1b0d52d0d3713d8185e31dde20546e122dbde9b9b5260b3ae2f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3d1aa5783e1b0d52d0d3713d8185e31dde20546e122dbde9b9b5260b3ae2f0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T0
9:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T09:54:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.782186 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/387b63b2-bff9-43f3-9a3d-0b81aec7f5a7-tmp-dir\") pod \"node-resolver-xj6nb\" (UID: \"387b63b2-bff9-43f3-9a3d-0b81aec7f5a7\") " pod="openshift-dns/node-resolver-xj6nb" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.782362 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-slash\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.782383 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-host-var-lib-cni-multus\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.782331 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-run-systemd\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.783583 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-var-lib-openvswitch\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.783649 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-ovnkube-config\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.783682 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/20b7f175-32b1-486b-b6c0-8c12a6ad8338-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lpnb6\" (UID: \"20b7f175-32b1-486b-b6c0-8c12a6ad8338\") " pod="openshift-multus/multus-additional-cni-plugins-lpnb6" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.782486 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5-rootfs\") pod \"machine-config-daemon-5vwrk\" (UID: \"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5\") " pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.782540 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-log-socket\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.782547 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-kubelet\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.782641 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-host-run-netns\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9" Jan 21 09:56:19 crc kubenswrapper[5119]: E0121 09:56:19.782637 5119 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 09:56:19 crc kubenswrapper[5119]: E0121 09:56:19.783836 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e481d9e-6dd0-4c5e-bb9a-33546cb7715d-metrics-certs podName:0e481d9e-6dd0-4c5e-bb9a-33546cb7715d nodeName:}" failed. No retries permitted until 2026-01-21 09:56:20.283809913 +0000 UTC m=+95.951901591 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0e481d9e-6dd0-4c5e-bb9a-33546cb7715d-metrics-certs") pod "network-metrics-daemon-fk2f6" (UID: "0e481d9e-6dd0-4c5e-bb9a-33546cb7715d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.782872 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c3c35acb-afad-4124-a4e6-bf36f963ecbf-multus-daemon-config\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.782977 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-systemd-units\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.783003 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-run-netns\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.783018 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.783022 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/20b7f175-32b1-486b-b6c0-8c12a6ad8338-system-cni-dir\") pod \"multus-additional-cni-plugins-lpnb6\" (UID: \"20b7f175-32b1-486b-b6c0-8c12a6ad8338\") " pod="openshift-multus/multus-additional-cni-plugins-lpnb6" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.783040 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-env-overrides\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.783050 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-etc-openvswitch\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.783050 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-run-ovn-kubernetes\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.783088 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-cni-netd\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.783095 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-etc-kubernetes\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.783102 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.783112 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-run-ovn\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.783186 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-multus-cni-dir\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.783279 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-run-openvswitch\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.783294 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/20b7f175-32b1-486b-b6c0-8c12a6ad8338-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-lpnb6\" (UID: \"20b7f175-32b1-486b-b6c0-8c12a6ad8338\") " pod="openshift-multus/multus-additional-cni-plugins-lpnb6" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.783322 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-multus-socket-dir-parent\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.783683 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-var-lib-openvswitch\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.782480 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-cni-bin\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.784334 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-ovnkube-config\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.784370 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " 
pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.784394 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-system-cni-dir\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.784416 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-os-release\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.784436 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-node-log\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.784478 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0eddaf40-46ea-4d13-b78e-a1f4c439795d-host\") pod \"node-ca-42nn8\" (UID: \"0eddaf40-46ea-4d13-b78e-a1f4c439795d\") " pod="openshift-image-registry/node-ca-42nn8" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.784507 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0eddaf40-46ea-4d13-b78e-a1f4c439795d-host\") pod \"node-ca-42nn8\" (UID: \"0eddaf40-46ea-4d13-b78e-a1f4c439795d\") " pod="openshift-image-registry/node-ca-42nn8" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.784512 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"os-release\" (UniqueName: \"kubernetes.io/host-path/20b7f175-32b1-486b-b6c0-8c12a6ad8338-os-release\") pod \"multus-additional-cni-plugins-lpnb6\" (UID: \"20b7f175-32b1-486b-b6c0-8c12a6ad8338\") " pod="openshift-multus/multus-additional-cni-plugins-lpnb6" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.784539 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/20b7f175-32b1-486b-b6c0-8c12a6ad8338-cni-binary-copy\") pod \"multus-additional-cni-plugins-lpnb6\" (UID: \"20b7f175-32b1-486b-b6c0-8c12a6ad8338\") " pod="openshift-multus/multus-additional-cni-plugins-lpnb6" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.784557 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-host-run-multus-certs\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.784576 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-host-var-lib-kubelet\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.784593 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/766a5e24-f953-49f2-b732-1a783ea97e3f-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-wkwlv\" (UID: \"766a5e24-f953-49f2-b732-1a783ea97e3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.784634 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/20b7f175-32b1-486b-b6c0-8c12a6ad8338-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-lpnb6\" (UID: \"20b7f175-32b1-486b-b6c0-8c12a6ad8338\") " pod="openshift-multus/multus-additional-cni-plugins-lpnb6" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.784651 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-os-release\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.784662 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-cnibin\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.784686 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c3c35acb-afad-4124-a4e6-bf36f963ecbf-cni-binary-copy\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.784686 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-node-log\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.784572 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-system-cni-dir\") pod \"multus-7d4r9\" (UID: 
\"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.784713 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-host-var-lib-kubelet\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.784738 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-host-run-k8s-cni-cncf-io\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.784760 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-hostroot\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.784785 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/20b7f175-32b1-486b-b6c0-8c12a6ad8338-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lpnb6\" (UID: \"20b7f175-32b1-486b-b6c0-8c12a6ad8338\") " pod="openshift-multus/multus-additional-cni-plugins-lpnb6" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.784835 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/20b7f175-32b1-486b-b6c0-8c12a6ad8338-os-release\") pod \"multus-additional-cni-plugins-lpnb6\" (UID: \"20b7f175-32b1-486b-b6c0-8c12a6ad8338\") " pod="openshift-multus/multus-additional-cni-plugins-lpnb6" Jan 21 
09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.782457 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/20b7f175-32b1-486b-b6c0-8c12a6ad8338-cnibin\") pod \"multus-additional-cni-plugins-lpnb6\" (UID: \"20b7f175-32b1-486b-b6c0-8c12a6ad8338\") " pod="openshift-multus/multus-additional-cni-plugins-lpnb6" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.785374 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5-mcd-auth-proxy-config\") pod \"machine-config-daemon-5vwrk\" (UID: \"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5\") " pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.782664 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/387b63b2-bff9-43f3-9a3d-0b81aec7f5a7-hosts-file\") pod \"node-resolver-xj6nb\" (UID: \"387b63b2-bff9-43f3-9a3d-0b81aec7f5a7\") " pod="openshift-dns/node-resolver-xj6nb" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.784789 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5-mcd-auth-proxy-config\") pod \"machine-config-daemon-5vwrk\" (UID: \"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5\") " pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.785626 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wmsct\" (UniqueName: \"kubernetes.io/projected/0eddaf40-46ea-4d13-b78e-a1f4c439795d-kube-api-access-wmsct\") pod \"node-ca-42nn8\" (UID: \"0eddaf40-46ea-4d13-b78e-a1f4c439795d\") " pod="openshift-image-registry/node-ca-42nn8" Jan 21 09:56:19 crc 
kubenswrapper[5119]: I0121 09:56:19.785653 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cxw7n\" (UniqueName: \"kubernetes.io/projected/20b7f175-32b1-486b-b6c0-8c12a6ad8338-kube-api-access-cxw7n\") pod \"multus-additional-cni-plugins-lpnb6\" (UID: \"20b7f175-32b1-486b-b6c0-8c12a6ad8338\") " pod="openshift-multus/multus-additional-cni-plugins-lpnb6"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.785675 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-host-var-lib-cni-bin\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.785697 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-multus-conf-dir\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.785804 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.785819 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.785832 5119 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.785847 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.785848 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/766a5e24-f953-49f2-b732-1a783ea97e3f-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-wkwlv\" (UID: \"766a5e24-f953-49f2-b732-1a783ea97e3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.785858 5119 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.785903 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.785921 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.785934 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.785946 5119 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.785958 5119 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.785969 5119 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.785980 5119 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.785991 5119 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786001 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-host-var-lib-cni-bin\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786005 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786027 5119 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786040 5119 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786038 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-host-run-multus-certs\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786051 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.784539 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786062 5119 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786074 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786085 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786095 5119 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786100 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-cnibin\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786105 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786123 5119 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786129 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-host-run-k8s-cni-cncf-io\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786134 5119 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786145 5119 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786157 5119 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786161 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-multus-conf-dir\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786168 5119 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786179 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786191 5119 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786201 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786213 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786224 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786235 5119 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786245 5119 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786256 5119 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786266 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786277 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786289 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786299 5119 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786310 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786322 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786333 5119 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786346 5119 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786356 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786367 5119 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786378 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786389 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786401 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786414 5119 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786414 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-hostroot\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786425 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786739 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c3c35acb-afad-4124-a4e6-bf36f963ecbf-cni-binary-copy\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.786945 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/20b7f175-32b1-486b-b6c0-8c12a6ad8338-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-lpnb6\" (UID: \"20b7f175-32b1-486b-b6c0-8c12a6ad8338\") " pod="openshift-multus/multus-additional-cni-plugins-lpnb6"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.782431 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c3c35acb-afad-4124-a4e6-bf36f963ecbf-host-var-lib-cni-multus\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.787958 5119 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.787999 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788024 5119 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788045 5119 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788065 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788085 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788104 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788161 5119 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788184 5119 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788203 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788224 5119 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788242 5119 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788261 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788281 5119 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788299 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788319 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788337 5119 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788356 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788378 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788399 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788417 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788440 5119 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788457 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788476 5119 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788495 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788514 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788534 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788555 5119 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788574 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788592 5119 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788640 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788660 5119 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788680 5119 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788698 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788717 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788735 5119 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788756 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788774 5119 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788792 5119 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788811 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788830 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788847 5119 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788868 5119 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788887 5119 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788905 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788925 5119 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788943 5119 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788962 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.788981 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.789001 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.789022 5119 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.789039 5119 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.789057 5119 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.789076 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.789097 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.789116 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.789138 5119 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.789158 5119 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.789179 5119 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.789203 5119 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.789229 5119 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.789249 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.789270 5119 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.789288 5119 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.789307 5119 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.789325 5119 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.791288 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/20b7f175-32b1-486b-b6c0-8c12a6ad8338-cni-binary-copy\") pod \"multus-additional-cni-plugins-lpnb6\" (UID: \"20b7f175-32b1-486b-b6c0-8c12a6ad8338\") " pod="openshift-multus/multus-additional-cni-plugins-lpnb6"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.795652 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/387b63b2-bff9-43f3-9a3d-0b81aec7f5a7-tmp-dir\") pod \"node-resolver-xj6nb\" (UID: \"387b63b2-bff9-43f3-9a3d-0b81aec7f5a7\") " pod="openshift-dns/node-resolver-xj6nb"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.784924 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-ovnkube-script-lib\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.800105 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5-proxy-tls\") pod \"machine-config-daemon-5vwrk\" (UID: \"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5\") " pod="openshift-machine-config-operator/machine-config-daemon-5vwrk"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.800375 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-btx85\" (UniqueName: \"kubernetes.io/projected/766a5e24-f953-49f2-b732-1a783ea97e3f-kube-api-access-btx85\") pod \"ovnkube-control-plane-57b78d8988-wkwlv\" (UID: \"766a5e24-f953-49f2-b732-1a783ea97e3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.801550 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0eddaf40-46ea-4d13-b78e-a1f4c439795d-serviceca\") pod \"node-ca-42nn8\" (UID: \"0eddaf40-46ea-4d13-b78e-a1f4c439795d\") " pod="openshift-image-registry/node-ca-42nn8"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.802025 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/766a5e24-f953-49f2-b732-1a783ea97e3f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-wkwlv\" (UID: \"766a5e24-f953-49f2-b732-1a783ea97e3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.802548 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9d9w\" (UniqueName: \"kubernetes.io/projected/0e481d9e-6dd0-4c5e-bb9a-33546cb7715d-kube-api-access-z9d9w\") pod \"network-metrics-daemon-fk2f6\" (UID: \"0e481d9e-6dd0-4c5e-bb9a-33546cb7715d\") " pod="openshift-multus/network-metrics-daemon-fk2f6"
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.803037 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.803440 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-ovn-node-metrics-cert\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.806253 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmsct\" (UniqueName: \"kubernetes.io/projected/0eddaf40-46ea-4d13-b78e-a1f4c439795d-kube-api-access-wmsct\") pod \"node-ca-42nn8\" (UID: \"0eddaf40-46ea-4d13-b78e-a1f4c439795d\") " pod="openshift-image-registry/node-ca-42nn8" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.807056 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlg2s\" (UniqueName: \"kubernetes.io/projected/c3c35acb-afad-4124-a4e6-bf36f963ecbf-kube-api-access-xlg2s\") pod \"multus-7d4r9\" (UID: \"c3c35acb-afad-4124-a4e6-bf36f963ecbf\") " pod="openshift-multus/multus-7d4r9" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.808405 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrqdj\" (UniqueName: \"kubernetes.io/projected/f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5-kube-api-access-xrqdj\") pod \"machine-config-daemon-5vwrk\" (UID: \"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5\") " pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.810025 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxw7n\" (UniqueName: \"kubernetes.io/projected/20b7f175-32b1-486b-b6c0-8c12a6ad8338-kube-api-access-cxw7n\") pod \"multus-additional-cni-plugins-lpnb6\" (UID: \"20b7f175-32b1-486b-b6c0-8c12a6ad8338\") " pod="openshift-multus/multus-additional-cni-plugins-lpnb6" 
Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.811169 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vw82\" (UniqueName: \"kubernetes.io/projected/387b63b2-bff9-43f3-9a3d-0b81aec7f5a7-kube-api-access-9vw82\") pod \"node-resolver-xj6nb\" (UID: \"387b63b2-bff9-43f3-9a3d-0b81aec7f5a7\") " pod="openshift-dns/node-resolver-xj6nb" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.813011 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:56:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5vwrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.815772 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhxd4\" (UniqueName: \"kubernetes.io/projected/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-kube-api-access-qhxd4\") pod \"ovnkube-node-lnxvl\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.821097 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42nn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0eddaf40-46ea-4d13-b78e-a1f4c439795d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmsct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:56:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42nn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.830263 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.838012 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.848274 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lpnb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20b7f175-32b1-486b-b6c0-8c12a6ad8338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cxw7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cxw7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cxw7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cxw7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cxw7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cxw7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cxw7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:56:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lpnb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.855243 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-xj6nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"387b63b2-bff9-43f3-9a3d-0b81aec7f5a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vw82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:56:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xj6nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.861480 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.861520 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.861529 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.861544 5119 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.861555 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:19Z","lastTransitionTime":"2026-01-21T09:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.866397 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ed9b586-3ebc-4f27-bf76-88b6622745c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://abe88e871fc9fda8e6ae7b32df7e0c1c09d159aad9289a8cbdd48ed4d8b3a85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:54:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://727f79868ee86317b4673832731b31a9bfda9544199d9a8a15703907dbcc5955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:54:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://80a2b2ffd835f4cb06510091082012d441852e2432e9b9235a1523ae4ccc2736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:54:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://af96667319
3b18b94972ab97e230ca79320cda15af6c9ff9d82808e946b8c6a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af966673193b18b94972ab97e230ca79320cda15af6c9ff9d82808e946b8c6a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T09:55:56Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0121 09:55:56.629591 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 09:55:56.629817 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0121 09:55:56.631090 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4186488590/tls.crt::/tmp/serving-cert-4186488590/tls.key\\\\\\\"\\\\nI0121 09:55:56.830869 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 09:55:56.834040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 09:55:56.834061 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 09:55:56.834102 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 09:55:56.834108 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 09:55:56.838351 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 09:55:56.838369 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 09:55:56.838374 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 09:55:56.838379 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 09:55:56.838383 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 09:55:56.838387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 09:55:56.838390 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 09:55:56.838539 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 09:55:56.841335 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T09:55:55Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f92a4390d94c573772e9bff65e3727a393648086e6c71d37714995ecbf1659a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:54:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6bc754856ba8182d75875ffd6507b081cb1e493e98c630dc820f040589b4412a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bc754856ba8182d75875ffd6507b081cb1e493e98c630dc820f040589b4412a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T09:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T09:54:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:54:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.878288 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.888378 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"766a5e24-f953-49f2-b732-1a783ea97e3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btx85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:56:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-wkwlv\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.893872 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.900949 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-7d4r9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3c35acb-afad-4124-a4e6-bf36f963ecbf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:56:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xlg2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:56:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7d4r9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.901061 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.905095 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.910369 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.917578 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.922884 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-7d4r9" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.931035 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lpnb6" Jan 21 09:56:19 crc kubenswrapper[5119]: W0121 09:56:19.934749 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-cf0a7c7a2265408724d03daea297956cabb6116a6bd2ba198041f587fe2a3234 WatchSource:0}: Error finding container cf0a7c7a2265408724d03daea297956cabb6116a6bd2ba198041f587fe2a3234: Status 404 returned error can't find the container with id cf0a7c7a2265408724d03daea297956cabb6116a6bd2ba198041f587fe2a3234 Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.937271 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.944151 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-42nn8" Jan 21 09:56:19 crc kubenswrapper[5119]: W0121 09:56:19.951530 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3c35acb_afad_4124_a4e6_bf36f963ecbf.slice/crio-15bdf1c6406913eb64c86e8224665fb80fa229b73fe065574813f00e49821d03 WatchSource:0}: Error finding container 15bdf1c6406913eb64c86e8224665fb80fa229b73fe065574813f00e49821d03: Status 404 returned error can't find the container with id 15bdf1c6406913eb64c86e8224665fb80fa229b73fe065574813f00e49821d03 Jan 21 09:56:19 crc kubenswrapper[5119]: W0121 09:56:19.955787 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3a5f299_f5ad_44f1_ba34_8b43da0a6cd5.slice/crio-8014a1bbce1af21c1f664ca126ef2bc00aecaa207dfc1cf6a608437a9be7aa76 WatchSource:0}: Error finding container 8014a1bbce1af21c1f664ca126ef2bc00aecaa207dfc1cf6a608437a9be7aa76: Status 404 returned error can't find the 
container with id 8014a1bbce1af21c1f664ca126ef2bc00aecaa207dfc1cf6a608437a9be7aa76 Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.956436 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-xj6nb" Jan 21 09:56:19 crc kubenswrapper[5119]: W0121 09:56:19.958217 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20b7f175_32b1_486b_b6c0_8c12a6ad8338.slice/crio-23d5cc64d96f4976984b8a66ad81144c10b6587bcc06533fddace729e5bd75d5 WatchSource:0}: Error finding container 23d5cc64d96f4976984b8a66ad81144c10b6587bcc06533fddace729e5bd75d5: Status 404 returned error can't find the container with id 23d5cc64d96f4976984b8a66ad81144c10b6587bcc06533fddace729e5bd75d5 Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.963616 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.963669 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.963683 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.963702 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:19 crc kubenswrapper[5119]: I0121 09:56:19.963743 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:19Z","lastTransitionTime":"2026-01-21T09:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:19 crc kubenswrapper[5119]: W0121 09:56:19.975936 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8726e82a_1e7a_48e2_b1f0_4e34b17b37be.slice/crio-aceefea6105e80341af015245c60666025ce058c3ee6eddabca665eb964a9d2b WatchSource:0}: Error finding container aceefea6105e80341af015245c60666025ce058c3ee6eddabca665eb964a9d2b: Status 404 returned error can't find the container with id aceefea6105e80341af015245c60666025ce058c3ee6eddabca665eb964a9d2b Jan 21 09:56:20 crc kubenswrapper[5119]: W0121 09:56:20.008650 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0eddaf40_46ea_4d13_b78e_a1f4c439795d.slice/crio-46880ae082e5ed0095617e17c62f00dd0513d78018acadf3ff3bf19770dffead WatchSource:0}: Error finding container 46880ae082e5ed0095617e17c62f00dd0513d78018acadf3ff3bf19770dffead: Status 404 returned error can't find the container with id 46880ae082e5ed0095617e17c62f00dd0513d78018acadf3ff3bf19770dffead Jan 21 09:56:20 crc kubenswrapper[5119]: W0121 09:56:20.010816 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod387b63b2_bff9_43f3_9a3d_0b81aec7f5a7.slice/crio-8b4873e4890eeff4ac9a551e37ec347a6a73d2563c105da77209a7f5dec26ee9 WatchSource:0}: Error finding container 8b4873e4890eeff4ac9a551e37ec347a6a73d2563c105da77209a7f5dec26ee9: Status 404 returned error can't find the container with id 8b4873e4890eeff4ac9a551e37ec347a6a73d2563c105da77209a7f5dec26ee9 Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.070239 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.070476 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.070485 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.070499 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.070508 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:20Z","lastTransitionTime":"2026-01-21T09:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.174411 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.174485 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.174502 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.174525 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.174541 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:20Z","lastTransitionTime":"2026-01-21T09:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.193715 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.193820 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.193873 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.193906 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 09:56:20 crc kubenswrapper[5119]: E0121 09:56:20.194042 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 
09:56:20 crc kubenswrapper[5119]: E0121 09:56:20.194060 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 09:56:20 crc kubenswrapper[5119]: E0121 09:56:20.194071 5119 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 09:56:20 crc kubenswrapper[5119]: E0121 09:56:20.194126 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-21 09:56:21.194110616 +0000 UTC m=+96.862202294 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 09:56:20 crc kubenswrapper[5119]: E0121 09:56:20.194479 5119 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 09:56:20 crc kubenswrapper[5119]: E0121 09:56:20.194512 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:21.194482486 +0000 UTC m=+96.862574164 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:20 crc kubenswrapper[5119]: E0121 09:56:20.194565 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 09:56:21.194544458 +0000 UTC m=+96.862636176 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 09:56:20 crc kubenswrapper[5119]: E0121 09:56:20.196171 5119 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 09:56:20 crc kubenswrapper[5119]: E0121 09:56:20.196243 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 09:56:21.196224433 +0000 UTC m=+96.864316111 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.280630 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.280669 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.280682 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.280699 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.280711 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:20Z","lastTransitionTime":"2026-01-21T09:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.294454 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.294508 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e481d9e-6dd0-4c5e-bb9a-33546cb7715d-metrics-certs\") pod \"network-metrics-daemon-fk2f6\" (UID: \"0e481d9e-6dd0-4c5e-bb9a-33546cb7715d\") " pod="openshift-multus/network-metrics-daemon-fk2f6"
Jan 21 09:56:20 crc kubenswrapper[5119]: E0121 09:56:20.294615 5119 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 09:56:20 crc kubenswrapper[5119]: E0121 09:56:20.294664 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e481d9e-6dd0-4c5e-bb9a-33546cb7715d-metrics-certs podName:0e481d9e-6dd0-4c5e-bb9a-33546cb7715d nodeName:}" failed. No retries permitted until 2026-01-21 09:56:21.294649753 +0000 UTC m=+96.962741431 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0e481d9e-6dd0-4c5e-bb9a-33546cb7715d-metrics-certs") pod "network-metrics-daemon-fk2f6" (UID: "0e481d9e-6dd0-4c5e-bb9a-33546cb7715d") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 09:56:20 crc kubenswrapper[5119]: E0121 09:56:20.295089 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 09:56:20 crc kubenswrapper[5119]: E0121 09:56:20.295114 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 09:56:20 crc kubenswrapper[5119]: E0121 09:56:20.295125 5119 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 09:56:20 crc kubenswrapper[5119]: E0121 09:56:20.295161 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-21 09:56:21.295149596 +0000 UTC m=+96.963241274 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.382449 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.382527 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.382538 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.382578 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.382592 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:20Z","lastTransitionTime":"2026-01-21T09:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.484457 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.484499 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.484508 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.484522 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.484532 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:20Z","lastTransitionTime":"2026-01-21T09:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.587083 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.587122 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.587130 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.587144 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.587154 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:20Z","lastTransitionTime":"2026-01-21T09:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.593920 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.594763 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.596305 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.597302 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.599058 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.600478 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.601702 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.602851 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.603684 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.604876 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.605690 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.607133 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.607816 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.609440 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.609901 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.610822 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.612462 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.614336 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.615666 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.618542 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.619862 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.622194 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.623139 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.625070 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.626060 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.627679 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.628851 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.630217 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.632402 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.633111 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.638547 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.640153 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.641694 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.643236 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.644028 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.644753 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.646173 5119 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.646382 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.649589 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.651094 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.652560 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.653820 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.654409 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.656432 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.660660 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.661270 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.662219 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.663953 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.664912 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.666251 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.667152 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.669068 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.670423 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.672745 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.675097 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.676370 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.678969 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.680694 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.689866 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.689914 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.689929 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.689944 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.689956 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:20Z","lastTransitionTime":"2026-01-21T09:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.792099 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.792383 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.792392 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.792404 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.792413 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:20Z","lastTransitionTime":"2026-01-21T09:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.894757 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.894814 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.894837 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.894858 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.894870 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:20Z","lastTransitionTime":"2026-01-21T09:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.919077 5119 generic.go:358] "Generic (PLEG): container finished" podID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerID="d825889c474d00efb63b28b2b9031854522bdd0776f82dcb7ce523509b34b311" exitCode=0
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.919150 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" event={"ID":"8726e82a-1e7a-48e2-b1f0-4e34b17b37be","Type":"ContainerDied","Data":"d825889c474d00efb63b28b2b9031854522bdd0776f82dcb7ce523509b34b311"}
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.919184 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" event={"ID":"8726e82a-1e7a-48e2-b1f0-4e34b17b37be","Type":"ContainerStarted","Data":"aceefea6105e80341af015245c60666025ce058c3ee6eddabca665eb964a9d2b"}
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.921013 5119 generic.go:358] "Generic (PLEG): container finished" podID="20b7f175-32b1-486b-b6c0-8c12a6ad8338" containerID="20add2a8d93dc3c0e728a1815a127eb59742f5e8a04cc9f983cd09c2ff8f9b3f" exitCode=0
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.921082 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lpnb6" event={"ID":"20b7f175-32b1-486b-b6c0-8c12a6ad8338","Type":"ContainerDied","Data":"20add2a8d93dc3c0e728a1815a127eb59742f5e8a04cc9f983cd09c2ff8f9b3f"}
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.921115 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lpnb6" event={"ID":"20b7f175-32b1-486b-b6c0-8c12a6ad8338","Type":"ContainerStarted","Data":"23d5cc64d96f4976984b8a66ad81144c10b6587bcc06533fddace729e5bd75d5"}
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.921960 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"cf0a7c7a2265408724d03daea297956cabb6116a6bd2ba198041f587fe2a3234"}
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.927471 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"9c7c1ffeb2d5e7eb870a90195c823720cdcbbd295ebf8a02d61e726b5a0fccdd"}
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.927518 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"f040ced7ae49c8d669202321233d270909fa7af5fcdc3bdb9c3c50392839cc84"}
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.927532 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"545ff01957b20694f940f54540e925a9c01da570f7fdeaff2fd01af1d49042b4"}
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.928520 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xj6nb" event={"ID":"387b63b2-bff9-43f3-9a3d-0b81aec7f5a7","Type":"ContainerStarted","Data":"dd32265eaaf212801d39a45589ba6b965fa128812031400400f115a19ab5766a"}
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.928549 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xj6nb" event={"ID":"387b63b2-bff9-43f3-9a3d-0b81aec7f5a7","Type":"ContainerStarted","Data":"8b4873e4890eeff4ac9a551e37ec347a6a73d2563c105da77209a7f5dec26ee9"}
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.929484 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-42nn8" event={"ID":"0eddaf40-46ea-4d13-b78e-a1f4c439795d","Type":"ContainerStarted","Data":"15eda257ecfdec2560046a013e85decdd39d233805452f6ed81e204d15f4a630"}
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.929509 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-42nn8" event={"ID":"0eddaf40-46ea-4d13-b78e-a1f4c439795d","Type":"ContainerStarted","Data":"46880ae082e5ed0095617e17c62f00dd0513d78018acadf3ff3bf19770dffead"}
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.930797 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7d4r9" event={"ID":"c3c35acb-afad-4124-a4e6-bf36f963ecbf","Type":"ContainerStarted","Data":"312f4cc68d22ceb0482ea69403845198dce304a803e2deb6620de418d8dc6b35"}
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.930829 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7d4r9" event={"ID":"c3c35acb-afad-4124-a4e6-bf36f963ecbf","Type":"ContainerStarted","Data":"15bdf1c6406913eb64c86e8224665fb80fa229b73fe065574813f00e49821d03"}
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.940090 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv" event={"ID":"766a5e24-f953-49f2-b732-1a783ea97e3f","Type":"ContainerStarted","Data":"5db6aa37f593c3cc95efad0c2d4844c706ccdebd4a245723f7db8072eb6551b3"}
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.940141 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv" event={"ID":"766a5e24-f953-49f2-b732-1a783ea97e3f","Type":"ContainerStarted","Data":"f897ff110499e10b426ada26c97d71a54ede29287a1d163998530c73051c54cc"}
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.940153 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv" event={"ID":"766a5e24-f953-49f2-b732-1a783ea97e3f","Type":"ContainerStarted","Data":"7647709b703c341792cbbbf669bdafc71596df1ac074149af0a2d16bb250099b"}
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.942401 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerStarted","Data":"1dff479f253613c9c57c784e29c215883a24162ae52039a11ac03ec965e64d13"}
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.942437 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerStarted","Data":"d40e23f49b5ccbd79a7b7631bfb1923bda39e2fe75a27231461f8d5e6aec28b1"}
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.942448 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerStarted","Data":"8014a1bbce1af21c1f664ca126ef2bc00aecaa207dfc1cf6a608437a9be7aa76"}
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.944396 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"7b89157e65c21142a1e4ace42f5f0499690a86e42215d60532192f3918ec4841"}
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.944431 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"ae6f2a2e75652ffe2ed534c498a50e61ad728a606a399e79b0347a25ece37af9"}
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.997687 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.997718 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.997727 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.997739 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:56:20 crc kubenswrapper[5119]: I0121 09:56:20.997747 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:20Z","lastTransitionTime":"2026-01-21T09:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.102336 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=2.102318355 podStartE2EDuration="2.102318355s" podCreationTimestamp="2026-01-21 09:56:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:21.091492356 +0000 UTC m=+96.759584034" watchObservedRunningTime="2026-01-21 09:56:21.102318355 +0000 UTC m=+96.770410033" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.102634 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.102759 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.102772 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.102786 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.102796 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:21Z","lastTransitionTime":"2026-01-21T09:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.196523 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=2.195754903 podStartE2EDuration="2.195754903s" podCreationTimestamp="2026-01-21 09:56:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:21.193364949 +0000 UTC m=+96.861456627" watchObservedRunningTime="2026-01-21 09:56:21.195754903 +0000 UTC m=+96.863846581" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.204327 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.204369 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.204380 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.204394 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.204406 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:21Z","lastTransitionTime":"2026-01-21T09:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.209134 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.209304 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.209342 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.209388 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:56:21 crc kubenswrapper[5119]: E0121 09:56:21.209494 5119 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not 
registered Jan 21 09:56:21 crc kubenswrapper[5119]: E0121 09:56:21.209548 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 09:56:23.209531771 +0000 UTC m=+98.877623449 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.209588 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=2.209574602 podStartE2EDuration="2.209574602s" podCreationTimestamp="2026-01-21 09:56:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:21.207920847 +0000 UTC m=+96.876012525" watchObservedRunningTime="2026-01-21 09:56:21.209574602 +0000 UTC m=+96.877666280" Jan 21 09:56:21 crc kubenswrapper[5119]: E0121 09:56:21.209881 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:23.2098709 +0000 UTC m=+98.877962588 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:21 crc kubenswrapper[5119]: E0121 09:56:21.209920 5119 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 09:56:21 crc kubenswrapper[5119]: E0121 09:56:21.209947 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 09:56:23.209939812 +0000 UTC m=+98.878031500 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 09:56:21 crc kubenswrapper[5119]: E0121 09:56:21.210006 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 09:56:21 crc kubenswrapper[5119]: E0121 09:56:21.210016 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 09:56:21 crc kubenswrapper[5119]: E0121 09:56:21.210026 5119 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 09:56:21 crc kubenswrapper[5119]: E0121 09:56:21.210050 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-21 09:56:23.210043024 +0000 UTC m=+98.878134702 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.230622 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=2.230589133 podStartE2EDuration="2.230589133s" podCreationTimestamp="2026-01-21 09:56:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:21.219433685 +0000 UTC m=+96.887525363" watchObservedRunningTime="2026-01-21 09:56:21.230589133 +0000 UTC m=+96.898680811" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.305758 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.306102 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.306115 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.306134 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.306147 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:21Z","lastTransitionTime":"2026-01-21T09:56:21Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.310073 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.310120 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e481d9e-6dd0-4c5e-bb9a-33546cb7715d-metrics-certs\") pod \"network-metrics-daemon-fk2f6\" (UID: \"0e481d9e-6dd0-4c5e-bb9a-33546cb7715d\") " pod="openshift-multus/network-metrics-daemon-fk2f6" Jan 21 09:56:21 crc kubenswrapper[5119]: E0121 09:56:21.310233 5119 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 09:56:21 crc kubenswrapper[5119]: E0121 09:56:21.310299 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e481d9e-6dd0-4c5e-bb9a-33546cb7715d-metrics-certs podName:0e481d9e-6dd0-4c5e-bb9a-33546cb7715d nodeName:}" failed. No retries permitted until 2026-01-21 09:56:23.310278123 +0000 UTC m=+98.978369811 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0e481d9e-6dd0-4c5e-bb9a-33546cb7715d-metrics-certs") pod "network-metrics-daemon-fk2f6" (UID: "0e481d9e-6dd0-4c5e-bb9a-33546cb7715d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 09:56:21 crc kubenswrapper[5119]: E0121 09:56:21.310588 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 09:56:21 crc kubenswrapper[5119]: E0121 09:56:21.310629 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 09:56:21 crc kubenswrapper[5119]: E0121 09:56:21.310641 5119 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 09:56:21 crc kubenswrapper[5119]: E0121 09:56:21.310702 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-21 09:56:23.310677964 +0000 UTC m=+98.978769642 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.315137 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv" podStartSLOduration=77.315121062 podStartE2EDuration="1m17.315121062s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:21.301425316 +0000 UTC m=+96.969516994" watchObservedRunningTime="2026-01-21 09:56:21.315121062 +0000 UTC m=+96.983212740" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.327833 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-7d4r9" podStartSLOduration=77.327814101 podStartE2EDuration="1m17.327814101s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:21.315266946 +0000 UTC m=+96.983358624" watchObservedRunningTime="2026-01-21 09:56:21.327814101 +0000 UTC m=+96.995905789" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.338273 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podStartSLOduration=77.33825884 podStartE2EDuration="1m17.33825884s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-21 09:56:21.337778207 +0000 UTC m=+97.005869905" watchObservedRunningTime="2026-01-21 09:56:21.33825884 +0000 UTC m=+97.006350518" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.350699 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-42nn8" podStartSLOduration=77.350684322 podStartE2EDuration="1m17.350684322s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:21.350631631 +0000 UTC m=+97.018723309" watchObservedRunningTime="2026-01-21 09:56:21.350684322 +0000 UTC m=+97.018776000" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.388943 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-xj6nb" podStartSLOduration=77.388928885 podStartE2EDuration="1m17.388928885s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:21.388789381 +0000 UTC m=+97.056881079" watchObservedRunningTime="2026-01-21 09:56:21.388928885 +0000 UTC m=+97.057020563" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.407711 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.407755 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.407767 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.407784 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:21 crc 
kubenswrapper[5119]: I0121 09:56:21.407797 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:21Z","lastTransitionTime":"2026-01-21T09:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.509988 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.510020 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.510028 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.510040 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.510049 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:21Z","lastTransitionTime":"2026-01-21T09:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.590921 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fk2f6" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.590952 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.590921 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.590964 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 09:56:21 crc kubenswrapper[5119]: E0121 09:56:21.591060 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fk2f6" podUID="0e481d9e-6dd0-4c5e-bb9a-33546cb7715d" Jan 21 09:56:21 crc kubenswrapper[5119]: E0121 09:56:21.591112 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 09:56:21 crc kubenswrapper[5119]: E0121 09:56:21.591162 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 09:56:21 crc kubenswrapper[5119]: E0121 09:56:21.591203 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.612087 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.612131 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.612143 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.612162 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.612174 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:21Z","lastTransitionTime":"2026-01-21T09:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.714102 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.714137 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.714146 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.714160 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.714169 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:21Z","lastTransitionTime":"2026-01-21T09:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.815460 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.815497 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.815507 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.815519 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.815528 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:21Z","lastTransitionTime":"2026-01-21T09:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.917384 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.917424 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.917433 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.917447 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.917456 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:21Z","lastTransitionTime":"2026-01-21T09:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.950669 5119 generic.go:358] "Generic (PLEG): container finished" podID="20b7f175-32b1-486b-b6c0-8c12a6ad8338" containerID="95cda9ddaeed9d0b821534f9bf7b4e9859fc9736176a237bd66e130471fbe02c" exitCode=0 Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.950769 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lpnb6" event={"ID":"20b7f175-32b1-486b-b6c0-8c12a6ad8338","Type":"ContainerDied","Data":"95cda9ddaeed9d0b821534f9bf7b4e9859fc9736176a237bd66e130471fbe02c"} Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.954478 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" event={"ID":"8726e82a-1e7a-48e2-b1f0-4e34b17b37be","Type":"ContainerStarted","Data":"0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc"} Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.954569 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" event={"ID":"8726e82a-1e7a-48e2-b1f0-4e34b17b37be","Type":"ContainerStarted","Data":"60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345"} Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.954586 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" event={"ID":"8726e82a-1e7a-48e2-b1f0-4e34b17b37be","Type":"ContainerStarted","Data":"f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622"} Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.954598 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" event={"ID":"8726e82a-1e7a-48e2-b1f0-4e34b17b37be","Type":"ContainerStarted","Data":"1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e"} Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.954627 5119 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" event={"ID":"8726e82a-1e7a-48e2-b1f0-4e34b17b37be","Type":"ContainerStarted","Data":"6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8"} Jan 21 09:56:21 crc kubenswrapper[5119]: I0121 09:56:21.954641 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" event={"ID":"8726e82a-1e7a-48e2-b1f0-4e34b17b37be","Type":"ContainerStarted","Data":"1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459"} Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.019231 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.019271 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.019279 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.019293 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.019302 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:22Z","lastTransitionTime":"2026-01-21T09:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.121470 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.121517 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.121531 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.121550 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.121562 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:22Z","lastTransitionTime":"2026-01-21T09:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.223812 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.224113 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.224123 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.224136 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.224145 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:22Z","lastTransitionTime":"2026-01-21T09:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.326226 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.326290 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.326312 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.326341 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.326359 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:22Z","lastTransitionTime":"2026-01-21T09:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.428498 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.428530 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.428539 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.428552 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.428561 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:22Z","lastTransitionTime":"2026-01-21T09:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.530940 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.530992 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.531011 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.531033 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.531052 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:22Z","lastTransitionTime":"2026-01-21T09:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.633366 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.633444 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.633469 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.633499 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.633524 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:22Z","lastTransitionTime":"2026-01-21T09:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.735989 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.736065 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.736094 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.736127 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.736152 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:22Z","lastTransitionTime":"2026-01-21T09:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.839003 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.839072 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.839090 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.839115 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.839135 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:22Z","lastTransitionTime":"2026-01-21T09:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.940822 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.940882 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.940901 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.940925 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.940942 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:22Z","lastTransitionTime":"2026-01-21T09:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.964880 5119 generic.go:358] "Generic (PLEG): container finished" podID="20b7f175-32b1-486b-b6c0-8c12a6ad8338" containerID="dc361690d1893ccc0392bd9d259292654d402d5e100859728541d6f394951feb" exitCode=0 Jan 21 09:56:22 crc kubenswrapper[5119]: I0121 09:56:22.964971 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lpnb6" event={"ID":"20b7f175-32b1-486b-b6c0-8c12a6ad8338","Type":"ContainerDied","Data":"dc361690d1893ccc0392bd9d259292654d402d5e100859728541d6f394951feb"} Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.043414 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.043448 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.043457 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.043470 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.043479 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:23Z","lastTransitionTime":"2026-01-21T09:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.145756 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.145805 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.145818 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.145840 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.145856 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:23Z","lastTransitionTime":"2026-01-21T09:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.230457 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.230590 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.230648 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.230699 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:56:23 crc kubenswrapper[5119]: E0121 09:56:23.230785 5119 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not 
registered Jan 21 09:56:23 crc kubenswrapper[5119]: E0121 09:56:23.230843 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 09:56:27.230828663 +0000 UTC m=+102.898920361 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 09:56:23 crc kubenswrapper[5119]: E0121 09:56:23.230907 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:27.230898325 +0000 UTC m=+102.898990013 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:23 crc kubenswrapper[5119]: E0121 09:56:23.230986 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 09:56:23 crc kubenswrapper[5119]: E0121 09:56:23.230998 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 09:56:23 crc kubenswrapper[5119]: E0121 09:56:23.231009 5119 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 09:56:23 crc kubenswrapper[5119]: E0121 09:56:23.231036 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-21 09:56:27.231028799 +0000 UTC m=+102.899120487 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 09:56:23 crc kubenswrapper[5119]: E0121 09:56:23.231084 5119 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 09:56:23 crc kubenswrapper[5119]: E0121 09:56:23.231108 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 09:56:27.23110142 +0000 UTC m=+102.899193108 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.249992 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.250086 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.250103 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.250158 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.250181 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:23Z","lastTransitionTime":"2026-01-21T09:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.331342 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.331397 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e481d9e-6dd0-4c5e-bb9a-33546cb7715d-metrics-certs\") pod \"network-metrics-daemon-fk2f6\" (UID: \"0e481d9e-6dd0-4c5e-bb9a-33546cb7715d\") " pod="openshift-multus/network-metrics-daemon-fk2f6" Jan 21 09:56:23 crc kubenswrapper[5119]: E0121 09:56:23.331509 5119 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 09:56:23 crc kubenswrapper[5119]: E0121 09:56:23.331573 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e481d9e-6dd0-4c5e-bb9a-33546cb7715d-metrics-certs podName:0e481d9e-6dd0-4c5e-bb9a-33546cb7715d nodeName:}" failed. No retries permitted until 2026-01-21 09:56:27.331555505 +0000 UTC m=+102.999647193 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0e481d9e-6dd0-4c5e-bb9a-33546cb7715d-metrics-certs") pod "network-metrics-daemon-fk2f6" (UID: "0e481d9e-6dd0-4c5e-bb9a-33546cb7715d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 09:56:23 crc kubenswrapper[5119]: E0121 09:56:23.331631 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 09:56:23 crc kubenswrapper[5119]: E0121 09:56:23.331673 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 09:56:23 crc kubenswrapper[5119]: E0121 09:56:23.331696 5119 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 09:56:23 crc kubenswrapper[5119]: E0121 09:56:23.331767 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-21 09:56:27.33174735 +0000 UTC m=+102.999839058 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.353308 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.353360 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.353369 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.353380 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.353389 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:23Z","lastTransitionTime":"2026-01-21T09:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.455454 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.455493 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.455506 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.455521 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.455533 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:23Z","lastTransitionTime":"2026-01-21T09:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.558182 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.558227 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.558240 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.558257 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.558270 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:23Z","lastTransitionTime":"2026-01-21T09:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.589859 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.589913 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fk2f6" Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.589921 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 21 09:56:23 crc kubenswrapper[5119]: E0121 09:56:23.589981 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 21 09:56:23 crc kubenswrapper[5119]: E0121 09:56:23.590044 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fk2f6" podUID="0e481d9e-6dd0-4c5e-bb9a-33546cb7715d"
Jan 21 09:56:23 crc kubenswrapper[5119]: E0121 09:56:23.590214 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.590225 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 09:56:23 crc kubenswrapper[5119]: E0121 09:56:23.590404 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.661371 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.661432 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.661449 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.661473 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.661492 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:23Z","lastTransitionTime":"2026-01-21T09:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.763458 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.763512 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.763530 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.763552 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.763569 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:23Z","lastTransitionTime":"2026-01-21T09:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.866325 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.866390 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.866408 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.866432 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.866449 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:23Z","lastTransitionTime":"2026-01-21T09:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.968042 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.968094 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.968107 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.968128 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.968143 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:23Z","lastTransitionTime":"2026-01-21T09:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.973638 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" event={"ID":"8726e82a-1e7a-48e2-b1f0-4e34b17b37be","Type":"ContainerStarted","Data":"9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8"}
Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.976280 5119 generic.go:358] "Generic (PLEG): container finished" podID="20b7f175-32b1-486b-b6c0-8c12a6ad8338" containerID="8bc8ceeede31925f06ec6d32b4cfb14b19bf0ab0d369b7fa427db8e12b5c71e0" exitCode=0
Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.976362 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lpnb6" event={"ID":"20b7f175-32b1-486b-b6c0-8c12a6ad8338","Type":"ContainerDied","Data":"8bc8ceeede31925f06ec6d32b4cfb14b19bf0ab0d369b7fa427db8e12b5c71e0"}
Jan 21 09:56:23 crc kubenswrapper[5119]: I0121 09:56:23.978476 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"14f14212a058f39e7a6fd4a67dfcd41b75b1ec4200f7caf94b575c520b49c371"}
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.070492 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.070546 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.070558 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.070578 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.070591 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:24Z","lastTransitionTime":"2026-01-21T09:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.172895 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.172942 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.172953 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.172993 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.173009 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:24Z","lastTransitionTime":"2026-01-21T09:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.264835 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.264873 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.264884 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.264897 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.264906 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:24Z","lastTransitionTime":"2026-01-21T09:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.285404 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.285438 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.285447 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.285460 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.285469 5119 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:56:24Z","lastTransitionTime":"2026-01-21T09:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.319009 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-phn6s"]
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.322388 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-phn6s"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.328228 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.328450 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\""
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.328569 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\""
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.331260 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.444250 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-phn6s\" (UID: \"4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-phn6s"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.444303 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-phn6s\" (UID: \"4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-phn6s"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.444369 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-phn6s\" (UID: \"4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-phn6s"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.444394 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-phn6s\" (UID: \"4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-phn6s"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.444494 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-phn6s\" (UID: \"4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-phn6s"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.545727 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-phn6s\" (UID: \"4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-phn6s"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.546054 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-phn6s\" (UID: \"4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-phn6s"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.546126 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-phn6s\" (UID: \"4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-phn6s"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.546143 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-phn6s\" (UID: \"4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-phn6s"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.546158 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-phn6s\" (UID: \"4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-phn6s"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.546246 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-phn6s\" (UID: \"4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-phn6s"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.546313 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-phn6s\" (UID: \"4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-phn6s"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.547304 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-phn6s\" (UID: \"4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-phn6s"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.557879 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-phn6s\" (UID: \"4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-phn6s"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.569059 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-phn6s\" (UID: \"4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-phn6s"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.611841 5119 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.620799 5119 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.634975 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-phn6s"
Jan 21 09:56:24 crc kubenswrapper[5119]: W0121 09:56:24.651213 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c2795f8_337b_4c0e_9bfd_0c98b6ca01a9.slice/crio-4757c54961e7c6898c71b882e29db9226553abb0d268694c49d5fb1d2cca7ef6 WatchSource:0}: Error finding container 4757c54961e7c6898c71b882e29db9226553abb0d268694c49d5fb1d2cca7ef6: Status 404 returned error can't find the container with id 4757c54961e7c6898c71b882e29db9226553abb0d268694c49d5fb1d2cca7ef6
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.983279 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-phn6s" event={"ID":"4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9","Type":"ContainerStarted","Data":"e0c6656a5adb1c6487d8979e916de951546a8149d55553ae83d9dcec106ca3e5"}
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.983338 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-phn6s" event={"ID":"4c2795f8-337b-4c0e-9bfd-0c98b6ca01a9","Type":"ContainerStarted","Data":"4757c54961e7c6898c71b882e29db9226553abb0d268694c49d5fb1d2cca7ef6"}
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.987036 5119 generic.go:358] "Generic (PLEG): container finished" podID="20b7f175-32b1-486b-b6c0-8c12a6ad8338" containerID="3d45ba3ff33dfa041fbdc5251715aa3b04cadac85308105be71eb8431a5a3918" exitCode=0
Jan 21 09:56:24 crc kubenswrapper[5119]: I0121 09:56:24.987122 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lpnb6" event={"ID":"20b7f175-32b1-486b-b6c0-8c12a6ad8338","Type":"ContainerDied","Data":"3d45ba3ff33dfa041fbdc5251715aa3b04cadac85308105be71eb8431a5a3918"}
Jan 21 09:56:25 crc kubenswrapper[5119]: I0121 09:56:25.025275 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-phn6s" podStartSLOduration=81.025256184 podStartE2EDuration="1m21.025256184s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:25.002513907 +0000 UTC m=+100.670605585" watchObservedRunningTime="2026-01-21 09:56:25.025256184 +0000 UTC m=+100.693347852"
Jan 21 09:56:25 crc kubenswrapper[5119]: I0121 09:56:25.590960 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fk2f6"
Jan 21 09:56:25 crc kubenswrapper[5119]: E0121 09:56:25.591153 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fk2f6" podUID="0e481d9e-6dd0-4c5e-bb9a-33546cb7715d"
Jan 21 09:56:25 crc kubenswrapper[5119]: I0121 09:56:25.591298 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 09:56:25 crc kubenswrapper[5119]: E0121 09:56:25.591378 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 21 09:56:25 crc kubenswrapper[5119]: I0121 09:56:25.591440 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 21 09:56:25 crc kubenswrapper[5119]: E0121 09:56:25.591491 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 21 09:56:25 crc kubenswrapper[5119]: I0121 09:56:25.591553 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 21 09:56:25 crc kubenswrapper[5119]: E0121 09:56:25.591637 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 21 09:56:25 crc kubenswrapper[5119]: I0121 09:56:25.992496 5119 generic.go:358] "Generic (PLEG): container finished" podID="20b7f175-32b1-486b-b6c0-8c12a6ad8338" containerID="e97af5e260bade6a1b299eec17dcd9009f13cb09ff5ea71e837fbd146207e05f" exitCode=0
Jan 21 09:56:25 crc kubenswrapper[5119]: I0121 09:56:25.992545 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lpnb6" event={"ID":"20b7f175-32b1-486b-b6c0-8c12a6ad8338","Type":"ContainerDied","Data":"e97af5e260bade6a1b299eec17dcd9009f13cb09ff5ea71e837fbd146207e05f"}
Jan 21 09:56:27 crc kubenswrapper[5119]: I0121 09:56:27.005344 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" event={"ID":"8726e82a-1e7a-48e2-b1f0-4e34b17b37be","Type":"ContainerStarted","Data":"3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546"}
Jan 21 09:56:27 crc kubenswrapper[5119]: I0121 09:56:27.005843 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl"
Jan 21 09:56:27 crc kubenswrapper[5119]: I0121 09:56:27.005866 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl"
Jan 21 09:56:27 crc kubenswrapper[5119]: I0121 09:56:27.036520 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" podStartSLOduration=83.036505159 podStartE2EDuration="1m23.036505159s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:27.031736431 +0000 UTC m=+102.699828129" watchObservedRunningTime="2026-01-21 09:56:27.036505159 +0000 UTC m=+102.704596837"
Jan 21 09:56:27 crc kubenswrapper[5119]: I0121 09:56:27.049123 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl"
Jan 21 09:56:27 crc kubenswrapper[5119]: I0121 09:56:27.280818 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:27 crc kubenswrapper[5119]: I0121 09:56:27.280930 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 09:56:27 crc kubenswrapper[5119]: E0121 09:56:27.280987 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:35.2809531 +0000 UTC m=+110.949044788 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:27 crc kubenswrapper[5119]: E0121 09:56:27.281002 5119 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 09:56:27 crc kubenswrapper[5119]: I0121 09:56:27.281047 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 21 09:56:27 crc kubenswrapper[5119]: E0121 09:56:27.281062 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 09:56:35.281047083 +0000 UTC m=+110.949138761 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 09:56:27 crc kubenswrapper[5119]: I0121 09:56:27.281130 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 09:56:27 crc kubenswrapper[5119]: E0121 09:56:27.281190 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 09:56:27 crc kubenswrapper[5119]: E0121 09:56:27.281207 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 09:56:27 crc kubenswrapper[5119]: E0121 09:56:27.281221 5119 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 09:56:27 crc kubenswrapper[5119]: E0121 09:56:27.281238 5119 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 09:56:27 crc kubenswrapper[5119]: E0121 09:56:27.281293 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-21 09:56:35.281282409 +0000 UTC m=+110.949374087 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 09:56:27 crc kubenswrapper[5119]: E0121 09:56:27.281313 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 09:56:35.28130386 +0000 UTC m=+110.949395668 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 09:56:27 crc kubenswrapper[5119]: I0121 09:56:27.381727 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 21 09:56:27 crc kubenswrapper[5119]: I0121 09:56:27.381771 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e481d9e-6dd0-4c5e-bb9a-33546cb7715d-metrics-certs\") pod \"network-metrics-daemon-fk2f6\" (UID: \"0e481d9e-6dd0-4c5e-bb9a-33546cb7715d\") " pod="openshift-multus/network-metrics-daemon-fk2f6"
Jan 21 09:56:27 crc kubenswrapper[5119]: E0121 09:56:27.381902 5119 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 09:56:27 crc kubenswrapper[5119]: E0121 09:56:27.381930 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 09:56:27 crc kubenswrapper[5119]: E0121 09:56:27.381954 5119 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 09:56:27 crc kubenswrapper[5119]: E0121 09:56:27.381966 5119 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 09:56:27 crc kubenswrapper[5119]: E0121 09:56:27.381980 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e481d9e-6dd0-4c5e-bb9a-33546cb7715d-metrics-certs podName:0e481d9e-6dd0-4c5e-bb9a-33546cb7715d nodeName:}" failed. No retries permitted until 2026-01-21 09:56:35.381958549 +0000 UTC m=+111.050050227 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0e481d9e-6dd0-4c5e-bb9a-33546cb7715d-metrics-certs") pod "network-metrics-daemon-fk2f6" (UID: "0e481d9e-6dd0-4c5e-bb9a-33546cb7715d") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 09:56:27 crc kubenswrapper[5119]: E0121 09:56:27.382027 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-21 09:56:35.382005931 +0000 UTC m=+111.050097669 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 09:56:27 crc kubenswrapper[5119]: I0121 09:56:27.590505 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 09:56:27 crc kubenswrapper[5119]: I0121 09:56:27.590593 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 21 09:56:27 crc kubenswrapper[5119]: E0121 09:56:27.590601 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 21 09:56:27 crc kubenswrapper[5119]: E0121 09:56:27.590721 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 21 09:56:27 crc kubenswrapper[5119]: I0121 09:56:27.590755 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 21 09:56:27 crc kubenswrapper[5119]: E0121 09:56:27.590810 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 09:56:27 crc kubenswrapper[5119]: I0121 09:56:27.590511 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fk2f6" Jan 21 09:56:27 crc kubenswrapper[5119]: E0121 09:56:27.590927 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fk2f6" podUID="0e481d9e-6dd0-4c5e-bb9a-33546cb7715d" Jan 21 09:56:28 crc kubenswrapper[5119]: I0121 09:56:28.011824 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lpnb6" event={"ID":"20b7f175-32b1-486b-b6c0-8c12a6ad8338","Type":"ContainerStarted","Data":"ce5aec6e0a24dec9e205ed1c46372e84e170eb83094a575280a7b4035632d918"} Jan 21 09:56:28 crc kubenswrapper[5119]: I0121 09:56:28.012855 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:28 crc kubenswrapper[5119]: I0121 09:56:28.037285 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" Jan 21 09:56:28 crc kubenswrapper[5119]: I0121 09:56:28.070455 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-lpnb6" podStartSLOduration=84.070440847 podStartE2EDuration="1m24.070440847s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:28.044921235 +0000 UTC m=+103.713012913" 
watchObservedRunningTime="2026-01-21 09:56:28.070440847 +0000 UTC m=+103.738532525" Jan 21 09:56:28 crc kubenswrapper[5119]: I0121 09:56:28.653625 5119 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 21 09:56:28 crc kubenswrapper[5119]: I0121 09:56:28.908472 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-fk2f6"] Jan 21 09:56:28 crc kubenswrapper[5119]: I0121 09:56:28.908600 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fk2f6" Jan 21 09:56:28 crc kubenswrapper[5119]: E0121 09:56:28.908703 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fk2f6" podUID="0e481d9e-6dd0-4c5e-bb9a-33546cb7715d" Jan 21 09:56:29 crc kubenswrapper[5119]: I0121 09:56:29.590896 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:56:29 crc kubenswrapper[5119]: I0121 09:56:29.590956 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 09:56:29 crc kubenswrapper[5119]: I0121 09:56:29.591003 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 09:56:29 crc kubenswrapper[5119]: E0121 09:56:29.591574 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 09:56:29 crc kubenswrapper[5119]: E0121 09:56:29.592010 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 09:56:29 crc kubenswrapper[5119]: E0121 09:56:29.591844 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 09:56:30 crc kubenswrapper[5119]: I0121 09:56:30.590111 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fk2f6" Jan 21 09:56:30 crc kubenswrapper[5119]: E0121 09:56:30.590578 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fk2f6" podUID="0e481d9e-6dd0-4c5e-bb9a-33546cb7715d" Jan 21 09:56:31 crc kubenswrapper[5119]: I0121 09:56:31.589908 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:56:31 crc kubenswrapper[5119]: I0121 09:56:31.589954 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 09:56:31 crc kubenswrapper[5119]: E0121 09:56:31.590060 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 09:56:31 crc kubenswrapper[5119]: E0121 09:56:31.590468 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 09:56:31 crc kubenswrapper[5119]: I0121 09:56:31.590685 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 09:56:31 crc kubenswrapper[5119]: I0121 09:56:31.590777 5119 scope.go:117] "RemoveContainer" containerID="af966673193b18b94972ab97e230ca79320cda15af6c9ff9d82808e946b8c6a8" Jan 21 09:56:31 crc kubenswrapper[5119]: E0121 09:56:31.590930 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 09:56:31 crc kubenswrapper[5119]: E0121 09:56:31.590995 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 09:56:32 crc kubenswrapper[5119]: I0121 09:56:32.590047 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fk2f6" Jan 21 09:56:32 crc kubenswrapper[5119]: E0121 09:56:32.590338 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fk2f6" podUID="0e481d9e-6dd0-4c5e-bb9a-33546cb7715d" Jan 21 09:56:33 crc kubenswrapper[5119]: I0121 09:56:33.463754 5119 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Jan 21 09:56:33 crc kubenswrapper[5119]: I0121 09:56:33.463967 5119 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Jan 21 09:56:33 crc kubenswrapper[5119]: I0121 09:56:33.527429 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-6kqm2"] Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.429272 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-6z5rg"] Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.429448 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.429920 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-6kqm2" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.434438 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.436768 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.436789 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.437036 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.437086 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.437132 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.437355 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.437560 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98"] Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.437854 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.438308 5119 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.438504 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fk2f6" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.438995 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6z5rg" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.439195 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.442607 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.443085 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.443102 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.444241 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.444342 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.444546 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-gngm4"] Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.444654 5119 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.444871 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.445024 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.445086 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.445164 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.445842 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.449249 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.449301 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-wrb86"] Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.450150 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-gngm4" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.450729 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.450833 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.450924 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.453722 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.453872 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.455585 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.455736 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.456000 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.456010 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Jan 21 09:56:34 crc 
kubenswrapper[5119]: I0121 09:56:34.456198 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-pxxr9"] Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.456738 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.456910 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.457694 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.459248 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.459939 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.460030 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.460080 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.460394 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.460516 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.460636 5119 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.460713 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.461005 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.461074 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.461144 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.461292 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-dpf2h"] Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.461370 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-pxxr9" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.465454 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-x6xfs"] Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.467268 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-dpf2h" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.469031 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.469083 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.469504 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-x6xfs" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.469360 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-94gcl"] Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.470492 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.470540 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.472863 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483145-q8nfm"] Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.473233 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.474164 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.474425 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.475144 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.475144 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.475318 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.480064 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.480415 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-4dmmv"] Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.481994 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.482909 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.482946 5119 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-q8nfm" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.482983 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.483397 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.483519 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.484085 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.484389 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.484514 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.487140 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-7tls5"] Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.487845 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-4dmmv"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.490510 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.490551 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.490894 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.491126 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.491226 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.493262 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-bkq66"]
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.493854 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-7tls5"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.495282 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.495670 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-7nw9s"]
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.496121 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bkq66"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.497061 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.497312 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.497518 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.497807 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.498706 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.499260 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.499580 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-cn8mz"]
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.499768 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-7nw9s"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.502074 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.502914 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.503438 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.503659 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.503666 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.503780 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.503938 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.504026 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.503898 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.504129 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.504195 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-8nd58"]
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.504368 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.504456 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.504851 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-cn8mz"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.505761 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.505855 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.506097 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.507703 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-jvrpf"]
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.507928 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-8nd58"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.509283 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.509597 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.509729 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.509777 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.509793 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.509919 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.510100 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.512740 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dc2k6"]
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.512890 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-jvrpf"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.513443 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.519609 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-vkvvq"]
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.520372 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.521125 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dc2k6"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.521875 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.526396 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-6jw94"]
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.526713 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-vkvvq"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.531965 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.532215 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hdhzp"]
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.536844 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-prvwt"]
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.536971 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hdhzp"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.539851 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lpk4t"]
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.552431 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-6jw94"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.552498 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-prvwt"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.555009 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v7d7l"]
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.555372 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lpk4t"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.558889 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-j7g7z"]
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.559009 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v7d7l"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.565016 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-installation-pull-secrets\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.565136 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-2glqc"]
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.565236 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-bound-sa-token\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.565420 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-ca-trust-extracted\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.565538 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1ccf6a04-2820-4b99-9dbd-2e6d111b4fed-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-6kqm2\" (UID: \"1ccf6a04-2820-4b99-9dbd-2e6d111b4fed\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6kqm2"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.565656 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/986b7816-8325-48f6-b5a5-2d51c9f31687-encryption-config\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.565813 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1ccf6a04-2820-4b99-9dbd-2e6d111b4fed-images\") pod \"machine-api-operator-755bb95488-6kqm2\" (UID: \"1ccf6a04-2820-4b99-9dbd-2e6d111b4fed\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6kqm2"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.566160 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fhws\" (UniqueName: \"kubernetes.io/projected/19471fb3-19b2-42d4-967e-6b0620f686ce-kube-api-access-4fhws\") pod \"ingress-operator-6b9cb4dbcf-x6xfs\" (UID: \"19471fb3-19b2-42d4-967e-6b0620f686ce\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-x6xfs"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.565240 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-j7g7z"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.567052 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmpxk\" (UniqueName: \"kubernetes.io/projected/1ccf6a04-2820-4b99-9dbd-2e6d111b4fed-kube-api-access-pmpxk\") pod \"machine-api-operator-755bb95488-6kqm2\" (UID: \"1ccf6a04-2820-4b99-9dbd-2e6d111b4fed\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6kqm2"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.572155 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.573776 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/986b7816-8325-48f6-b5a5-2d51c9f31687-serving-cert\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.573818 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-registry-certificates\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.573848 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/19471fb3-19b2-42d4-967e-6b0620f686ce-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-x6xfs\" (UID: \"19471fb3-19b2-42d4-967e-6b0620f686ce\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-x6xfs"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.573901 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/986b7816-8325-48f6-b5a5-2d51c9f31687-image-import-ca\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.573922 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65fa8c5a-91c4-411a-9586-2f893dfda634-config\") pod \"authentication-operator-7f5c659b84-6z5rg\" (UID: \"65fa8c5a-91c4-411a-9586-2f893dfda634\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6z5rg"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.573943 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65fa8c5a-91c4-411a-9586-2f893dfda634-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-6z5rg\" (UID: \"65fa8c5a-91c4-411a-9586-2f893dfda634\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6z5rg"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.573970 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49421ea1-6e3f-41b1-b0d6-e821cac2f8ab-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-pxxr9\" (UID: \"49421ea1-6e3f-41b1-b0d6-e821cac2f8ab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-pxxr9"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.573995 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee0294ff-f61f-492b-b738-fbbee8f757eb-client-ca\") pod \"controller-manager-65b6cccf98-gngm4\" (UID: \"ee0294ff-f61f-492b-b738-fbbee8f757eb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-gngm4"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574017 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/986b7816-8325-48f6-b5a5-2d51c9f31687-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574039 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/986b7816-8325-48f6-b5a5-2d51c9f31687-audit-dir\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574068 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9202c0b0-32fd-49a9-85ce-98c79744bfcf-available-featuregates\") pod \"openshift-config-operator-5777786469-dpf2h\" (UID: \"9202c0b0-32fd-49a9-85ce-98c79744bfcf\") " pod="openshift-config-operator/openshift-config-operator-5777786469-dpf2h"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574105 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/986b7816-8325-48f6-b5a5-2d51c9f31687-config\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574135 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/986b7816-8325-48f6-b5a5-2d51c9f31687-etcd-client\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574191 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9pql\" (UniqueName: \"kubernetes.io/projected/49421ea1-6e3f-41b1-b0d6-e821cac2f8ab-kube-api-access-n9pql\") pod \"openshift-apiserver-operator-846cbfc458-pxxr9\" (UID: \"49421ea1-6e3f-41b1-b0d6-e821cac2f8ab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-pxxr9"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574232 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8qvc\" (UniqueName: \"kubernetes.io/projected/f53b6ab7-e57d-4f85-adef-9a60515f8f1f-kube-api-access-s8qvc\") pod \"collect-profiles-29483145-q8nfm\" (UID: \"f53b6ab7-e57d-4f85-adef-9a60515f8f1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-q8nfm"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574253 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-registry-tls\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574276 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65fa8c5a-91c4-411a-9586-2f893dfda634-serving-cert\") pod \"authentication-operator-7f5c659b84-6z5rg\" (UID: \"65fa8c5a-91c4-411a-9586-2f893dfda634\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6z5rg"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574303 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee0294ff-f61f-492b-b738-fbbee8f757eb-config\") pod \"controller-manager-65b6cccf98-gngm4\" (UID: \"ee0294ff-f61f-492b-b738-fbbee8f757eb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-gngm4"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574324 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d762n\" (UniqueName: \"kubernetes.io/projected/986b7816-8325-48f6-b5a5-2d51c9f31687-kube-api-access-d762n\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574346 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fzp5\" (UniqueName: \"kubernetes.io/projected/bfe4596e-cffd-4e61-b095-455eea1ed712-kube-api-access-5fzp5\") pod \"machine-approver-54c688565-4dmmv\" (UID: \"bfe4596e-cffd-4e61-b095-455eea1ed712\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-4dmmv"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574379 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ee0294ff-f61f-492b-b738-fbbee8f757eb-tmp\") pod \"controller-manager-65b6cccf98-gngm4\" (UID: \"ee0294ff-f61f-492b-b738-fbbee8f757eb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-gngm4"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574400 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvrkq\" (UniqueName: \"kubernetes.io/projected/d3c70e39-bf38-42a7-b579-ed17a163a5b1-kube-api-access-xvrkq\") pod \"route-controller-manager-776cdc94d6-vgx98\" (UID: \"d3c70e39-bf38-42a7-b579-ed17a163a5b1\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574427 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clhbh\" (UniqueName: \"kubernetes.io/projected/ee0294ff-f61f-492b-b738-fbbee8f757eb-kube-api-access-clhbh\") pod \"controller-manager-65b6cccf98-gngm4\" (UID: \"ee0294ff-f61f-492b-b738-fbbee8f757eb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-gngm4"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574446 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9202c0b0-32fd-49a9-85ce-98c79744bfcf-serving-cert\") pod \"openshift-config-operator-5777786469-dpf2h\" (UID: \"9202c0b0-32fd-49a9-85ce-98c79744bfcf\") " pod="openshift-config-operator/openshift-config-operator-5777786469-dpf2h"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574471 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/986b7816-8325-48f6-b5a5-2d51c9f31687-node-pullsecrets\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574491 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/19471fb3-19b2-42d4-967e-6b0620f686ce-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-x6xfs\" (UID: \"19471fb3-19b2-42d4-967e-6b0620f686ce\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-x6xfs"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574525 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65fa8c5a-91c4-411a-9586-2f893dfda634-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-6z5rg\" (UID: \"65fa8c5a-91c4-411a-9586-2f893dfda634\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6z5rg"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574551 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f53b6ab7-e57d-4f85-adef-9a60515f8f1f-config-volume\") pod \"collect-profiles-29483145-q8nfm\" (UID: \"f53b6ab7-e57d-4f85-adef-9a60515f8f1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-q8nfm"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574571 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/bfe4596e-cffd-4e61-b095-455eea1ed712-machine-approver-tls\") pod \"machine-approver-54c688565-4dmmv\" (UID: \"bfe4596e-cffd-4e61-b095-455eea1ed712\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-4dmmv"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574640 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee0294ff-f61f-492b-b738-fbbee8f757eb-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-gngm4\" (UID: \"ee0294ff-f61f-492b-b738-fbbee8f757eb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-gngm4"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574706 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/19471fb3-19b2-42d4-967e-6b0620f686ce-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-x6xfs\" (UID: \"19471fb3-19b2-42d4-967e-6b0620f686ce\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-x6xfs"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574749 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3c70e39-bf38-42a7-b579-ed17a163a5b1-client-ca\") pod \"route-controller-manager-776cdc94d6-vgx98\" (UID: \"d3c70e39-bf38-42a7-b579-ed17a163a5b1\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574771 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3c70e39-bf38-42a7-b579-ed17a163a5b1-serving-cert\") pod \"route-controller-manager-776cdc94d6-vgx98\" (UID: \"d3c70e39-bf38-42a7-b579-ed17a163a5b1\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574853 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/986b7816-8325-48f6-b5a5-2d51c9f31687-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574873 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3c70e39-bf38-42a7-b579-ed17a163a5b1-config\") pod \"route-controller-manager-776cdc94d6-vgx98\" (UID: \"d3c70e39-bf38-42a7-b579-ed17a163a5b1\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574899 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ccf6a04-2820-4b99-9dbd-2e6d111b4fed-config\") pod \"machine-api-operator-755bb95488-6kqm2\" (UID: \"1ccf6a04-2820-4b99-9dbd-2e6d111b4fed\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6kqm2"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574923 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f53b6ab7-e57d-4f85-adef-9a60515f8f1f-secret-volume\") pod \"collect-profiles-29483145-q8nfm\" (UID: \"f53b6ab7-e57d-4f85-adef-9a60515f8f1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-q8nfm"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574943 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-trusted-ca\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574970 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3c70e39-bf38-42a7-b579-ed17a163a5b1-tmp\") pod \"route-controller-manager-776cdc94d6-vgx98\" (UID: \"d3c70e39-bf38-42a7-b579-ed17a163a5b1\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.574989 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkmpg\" (UniqueName: \"kubernetes.io/projected/65fa8c5a-91c4-411a-9586-2f893dfda634-kube-api-access-hkmpg\") pod \"authentication-operator-7f5c659b84-6z5rg\" (UID: \"65fa8c5a-91c4-411a-9586-2f893dfda634\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6z5rg"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.575025 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.575049 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49421ea1-6e3f-41b1-b0d6-e821cac2f8ab-config\") pod \"openshift-apiserver-operator-846cbfc458-pxxr9\" (UID: \"49421ea1-6e3f-41b1-b0d6-e821cac2f8ab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-pxxr9"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.575079 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqwbp\" (UniqueName: \"kubernetes.io/projected/9202c0b0-32fd-49a9-85ce-98c79744bfcf-kube-api-access-wqwbp\") pod \"openshift-config-operator-5777786469-dpf2h\" (UID: \"9202c0b0-32fd-49a9-85ce-98c79744bfcf\") " pod="openshift-config-operator/openshift-config-operator-5777786469-dpf2h"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.575101 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfe4596e-cffd-4e61-b095-455eea1ed712-config\") pod \"machine-approver-54c688565-4dmmv\" (UID: \"bfe4596e-cffd-4e61-b095-455eea1ed712\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-4dmmv"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.575134 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee0294ff-f61f-492b-b738-fbbee8f757eb-serving-cert\") pod \"controller-manager-65b6cccf98-gngm4\" (UID: \"ee0294ff-f61f-492b-b738-fbbee8f757eb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-gngm4"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.575157 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/986b7816-8325-48f6-b5a5-2d51c9f31687-audit\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.575178 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvrbm\" (UniqueName: \"kubernetes.io/projected/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-kube-api-access-lvrbm\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.575198 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\"
(UniqueName: \"kubernetes.io/configmap/bfe4596e-cffd-4e61-b095-455eea1ed712-auth-proxy-config\") pod \"machine-approver-54c688565-4dmmv\" (UID: \"bfe4596e-cffd-4e61-b095-455eea1ed712\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-4dmmv" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.575464 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-pxxr9"] Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.575505 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rjq8j"] Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.575816 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc" Jan 21 09:56:34 crc kubenswrapper[5119]: E0121 09:56:34.575974 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:35.075957897 +0000 UTC m=+110.744049575 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.578570 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4rx9w"] Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.579564 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rjq8j" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.580917 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mgn7f"] Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.581186 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4rx9w" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.583221 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nwbtj"] Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.583329 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mgn7f" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.586441 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-h4bjc"] Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.586539 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nwbtj" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.591651 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-h4bjc" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.592743 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.597294 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5b4c8"] Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.602192 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fs8fn"] Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.603319 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5b4c8" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.609658 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-hnjk6"] Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.609802 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fs8fn" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.612138 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.631901 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.652500 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.672107 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.675836 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676010 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f4fe1ed8-46ec-4253-8371-144cad3c3573-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-rjq8j\" (UID: \"f4fe1ed8-46ec-4253-8371-144cad3c3573\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rjq8j" Jan 21 09:56:34 crc kubenswrapper[5119]: E0121 09:56:34.676036 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-21 09:56:35.176013761 +0000 UTC m=+110.844105439 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676087 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49421ea1-6e3f-41b1-b0d6-e821cac2f8ab-config\") pod \"openshift-apiserver-operator-846cbfc458-pxxr9\" (UID: \"49421ea1-6e3f-41b1-b0d6-e821cac2f8ab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-pxxr9" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676113 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7d4bb4e5-bb28-41fd-8095-0392fd6b8afb-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-h4bjc\" (UID: \"7d4bb4e5-bb28-41fd-8095-0392fd6b8afb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-h4bjc" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676133 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/36ec96a0-85cc-4757-ac20-cff015ffbe19-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-bkq66\" (UID: \"36ec96a0-85cc-4757-ac20-cff015ffbe19\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bkq66" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676150 
5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/333793e1-92de-4fbe-83b5-26b64848c6af-service-ca-bundle\") pod \"router-default-68cf44c8b8-cn8mz\" (UID: \"333793e1-92de-4fbe-83b5-26b64848c6af\") " pod="openshift-ingress/router-default-68cf44c8b8-cn8mz" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676165 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47512efb-ea0a-42ac-a2c6-fd3017df0ce1-trusted-ca-bundle\") pod \"apiserver-8596bd845d-2glqc\" (UID: \"47512efb-ea0a-42ac-a2c6-fd3017df0ce1\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676223 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wqwbp\" (UniqueName: \"kubernetes.io/projected/9202c0b0-32fd-49a9-85ce-98c79744bfcf-kube-api-access-wqwbp\") pod \"openshift-config-operator-5777786469-dpf2h\" (UID: \"9202c0b0-32fd-49a9-85ce-98c79744bfcf\") " pod="openshift-config-operator/openshift-config-operator-5777786469-dpf2h" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676253 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfe4596e-cffd-4e61-b095-455eea1ed712-config\") pod \"machine-approver-54c688565-4dmmv\" (UID: \"bfe4596e-cffd-4e61-b095-455eea1ed712\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-4dmmv" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676277 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09592b3a-cb47-43ee-97e7-f058888af3ff-config\") pod \"openshift-kube-scheduler-operator-54f497555d-hdhzp\" (UID: 
\"09592b3a-cb47-43ee-97e7-f058888af3ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hdhzp" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676297 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/315dcf5a-c0ec-4778-9118-2f68422fcc17-etcd-client\") pod \"etcd-operator-69b85846b6-7nw9s\" (UID: \"315dcf5a-c0ec-4778-9118-2f68422fcc17\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7nw9s" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676321 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lvrbm\" (UniqueName: \"kubernetes.io/projected/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-kube-api-access-lvrbm\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676339 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/986b7816-8325-48f6-b5a5-2d51c9f31687-audit\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676354 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bfe4596e-cffd-4e61-b095-455eea1ed712-auth-proxy-config\") pod \"machine-approver-54c688565-4dmmv\" (UID: \"bfe4596e-cffd-4e61-b095-455eea1ed712\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-4dmmv" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676370 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/6169753a-a446-4d39-85c2-01422f667bde-tmpfs\") pod \"packageserver-7d4fc7d867-fs8fn\" (UID: \"6169753a-a446-4d39-85c2-01422f667bde\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fs8fn" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676389 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a1c5db5b-e8c1-4d79-aca9-10703c8e82db-signing-cabundle\") pod \"service-ca-74545575db-6jw94\" (UID: \"a1c5db5b-e8c1-4d79-aca9-10703c8e82db\") " pod="openshift-service-ca/service-ca-74545575db-6jw94" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676440 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-ca-trust-extracted\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676462 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b88913b0-a37a-46b4-9c43-4e2e22f306d5-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-nwbtj\" (UID: \"b88913b0-a37a-46b4-9c43-4e2e22f306d5\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nwbtj" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676513 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jx77\" (UniqueName: \"kubernetes.io/projected/fab52538-cd8b-408e-b571-f2dc516dc2a3-kube-api-access-5jx77\") pod \"console-64d44f6ddf-8nd58\" (UID: \"fab52538-cd8b-408e-b571-f2dc516dc2a3\") " pod="openshift-console/console-64d44f6ddf-8nd58" Jan 21 09:56:34 crc 
kubenswrapper[5119]: I0121 09:56:34.676600 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c82a029-666a-49b5-8c4c-e8956a23303a-serving-cert\") pod \"service-ca-operator-5b9c976747-vkvvq\" (UID: \"4c82a029-666a-49b5-8c4c-e8956a23303a\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-vkvvq" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676634 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4f6fb51-60b9-4dcd-b79a-ebe933c83555-serving-cert\") pod \"console-operator-67c89758df-jvrpf\" (UID: \"f4f6fb51-60b9-4dcd-b79a-ebe933c83555\") " pod="openshift-console-operator/console-operator-67c89758df-jvrpf" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676658 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1ccf6a04-2820-4b99-9dbd-2e6d111b4fed-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-6kqm2\" (UID: \"1ccf6a04-2820-4b99-9dbd-2e6d111b4fed\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6kqm2" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676677 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/986b7816-8325-48f6-b5a5-2d51c9f31687-encryption-config\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676698 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4fhws\" (UniqueName: \"kubernetes.io/projected/19471fb3-19b2-42d4-967e-6b0620f686ce-kube-api-access-4fhws\") pod \"ingress-operator-6b9cb4dbcf-x6xfs\" (UID: 
\"19471fb3-19b2-42d4-967e-6b0620f686ce\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-x6xfs" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676715 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fab52538-cd8b-408e-b571-f2dc516dc2a3-trusted-ca-bundle\") pod \"console-64d44f6ddf-8nd58\" (UID: \"fab52538-cd8b-408e-b571-f2dc516dc2a3\") " pod="openshift-console/console-64d44f6ddf-8nd58" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676731 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pmpxk\" (UniqueName: \"kubernetes.io/projected/1ccf6a04-2820-4b99-9dbd-2e6d111b4fed-kube-api-access-pmpxk\") pod \"machine-api-operator-755bb95488-6kqm2\" (UID: \"1ccf6a04-2820-4b99-9dbd-2e6d111b4fed\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6kqm2" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676748 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-registry-certificates\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676762 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/19471fb3-19b2-42d4-967e-6b0620f686ce-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-x6xfs\" (UID: \"19471fb3-19b2-42d4-967e-6b0620f686ce\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-x6xfs" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676780 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhvsd\" (UniqueName: 
\"kubernetes.io/projected/d758cf9c-d67a-46df-a626-14e4a6a92be8-kube-api-access-qhvsd\") pod \"machine-config-operator-67c9d58cbb-mgn7f\" (UID: \"d758cf9c-d67a-46df-a626-14e4a6a92be8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mgn7f" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676796 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/36ec96a0-85cc-4757-ac20-cff015ffbe19-tmp\") pod \"cluster-image-registry-operator-86c45576b9-bkq66\" (UID: \"36ec96a0-85cc-4757-ac20-cff015ffbe19\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bkq66" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676812 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65fa8c5a-91c4-411a-9586-2f893dfda634-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-6z5rg\" (UID: \"65fa8c5a-91c4-411a-9586-2f893dfda634\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6z5rg" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676828 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49421ea1-6e3f-41b1-b0d6-e821cac2f8ab-config\") pod \"openshift-apiserver-operator-846cbfc458-pxxr9\" (UID: \"49421ea1-6e3f-41b1-b0d6-e821cac2f8ab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-pxxr9" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676831 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/315dcf5a-c0ec-4778-9118-2f68422fcc17-config\") pod \"etcd-operator-69b85846b6-7nw9s\" (UID: \"315dcf5a-c0ec-4778-9118-2f68422fcc17\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7nw9s" Jan 21 09:56:34 crc 
kubenswrapper[5119]: I0121 09:56:34.676877 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/986b7816-8325-48f6-b5a5-2d51c9f31687-audit-dir\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.676885 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-ca-trust-extracted\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.677089 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fab52538-cd8b-408e-b571-f2dc516dc2a3-console-config\") pod \"console-64d44f6ddf-8nd58\" (UID: \"fab52538-cd8b-408e-b571-f2dc516dc2a3\") " pod="openshift-console/console-64d44f6ddf-8nd58" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.677143 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/986b7816-8325-48f6-b5a5-2d51c9f31687-config\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.677175 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fab52538-cd8b-408e-b571-f2dc516dc2a3-service-ca\") pod \"console-64d44f6ddf-8nd58\" (UID: \"fab52538-cd8b-408e-b571-f2dc516dc2a3\") " pod="openshift-console/console-64d44f6ddf-8nd58" Jan 21 09:56:34 crc kubenswrapper[5119]: 
I0121 09:56:34.677198 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9d29fd5f-ecd0-4624-97b8-5f2d50b70df0-tmp-dir\") pod \"kube-apiserver-operator-575994946d-lpk4t\" (UID: \"9d29fd5f-ecd0-4624-97b8-5f2d50b70df0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lpk4t" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.677384 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/986b7816-8325-48f6-b5a5-2d51c9f31687-audit-dir\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.677445 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/986b7816-8325-48f6-b5a5-2d51c9f31687-audit\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.677477 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bfe4596e-cffd-4e61-b095-455eea1ed712-auth-proxy-config\") pod \"machine-approver-54c688565-4dmmv\" (UID: \"bfe4596e-cffd-4e61-b095-455eea1ed712\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-4dmmv" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.677686 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/986b7816-8325-48f6-b5a5-2d51c9f31687-etcd-client\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 
09:56:34.677717 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a1c5db5b-e8c1-4d79-aca9-10703c8e82db-signing-key\") pod \"service-ca-74545575db-6jw94\" (UID: \"a1c5db5b-e8c1-4d79-aca9-10703c8e82db\") " pod="openshift-service-ca/service-ca-74545575db-6jw94" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.677844 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/986b7816-8325-48f6-b5a5-2d51c9f31687-config\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.677933 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/36ec96a0-85cc-4757-ac20-cff015ffbe19-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-bkq66\" (UID: \"36ec96a0-85cc-4757-ac20-cff015ffbe19\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bkq66" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.678061 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee0294ff-f61f-492b-b738-fbbee8f757eb-config\") pod \"controller-manager-65b6cccf98-gngm4\" (UID: \"ee0294ff-f61f-492b-b738-fbbee8f757eb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-gngm4" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.678099 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d762n\" (UniqueName: \"kubernetes.io/projected/986b7816-8325-48f6-b5a5-2d51c9f31687-kube-api-access-d762n\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " 
pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.678123 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6169753a-a446-4d39-85c2-01422f667bde-webhook-cert\") pod \"packageserver-7d4fc7d867-fs8fn\" (UID: \"6169753a-a446-4d39-85c2-01422f667bde\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fs8fn" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.678149 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-audit-dir\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.678505 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wvbg\" (UniqueName: \"kubernetes.io/projected/d91bd19e-1fee-475c-a8ff-ee1014086695-kube-api-access-9wvbg\") pod \"downloads-747b44746d-prvwt\" (UID: \"d91bd19e-1fee-475c-a8ff-ee1014086695\") " pod="openshift-console/downloads-747b44746d-prvwt" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.678533 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/315dcf5a-c0ec-4778-9118-2f68422fcc17-tmp-dir\") pod \"etcd-operator-69b85846b6-7nw9s\" (UID: \"315dcf5a-c0ec-4778-9118-2f68422fcc17\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7nw9s" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.678561 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9202c0b0-32fd-49a9-85ce-98c79744bfcf-serving-cert\") pod 
\"openshift-config-operator-5777786469-dpf2h\" (UID: \"9202c0b0-32fd-49a9-85ce-98c79744bfcf\") " pod="openshift-config-operator/openshift-config-operator-5777786469-dpf2h" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.678591 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-audit-policies\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.678634 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.678662 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/986b7816-8325-48f6-b5a5-2d51c9f31687-node-pullsecrets\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.678687 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d758cf9c-d67a-46df-a626-14e4a6a92be8-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-mgn7f\" (UID: \"d758cf9c-d67a-46df-a626-14e4a6a92be8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mgn7f" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.678708 5119 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.678735 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.678763 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/315dcf5a-c0ec-4778-9118-2f68422fcc17-etcd-service-ca\") pod \"etcd-operator-69b85846b6-7nw9s\" (UID: \"315dcf5a-c0ec-4778-9118-2f68422fcc17\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7nw9s" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.678783 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d29fd5f-ecd0-4624-97b8-5f2d50b70df0-serving-cert\") pod \"kube-apiserver-operator-575994946d-lpk4t\" (UID: \"9d29fd5f-ecd0-4624-97b8-5f2d50b70df0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lpk4t" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.678809 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqdlz\" (UniqueName: 
\"kubernetes.io/projected/f4fe1ed8-46ec-4253-8371-144cad3c3573-kube-api-access-cqdlz\") pod \"catalog-operator-75ff9f647d-rjq8j\" (UID: \"f4fe1ed8-46ec-4253-8371-144cad3c3573\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rjq8j" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.678831 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f4f6fb51-60b9-4dcd-b79a-ebe933c83555-trusted-ca\") pod \"console-operator-67c89758df-jvrpf\" (UID: \"f4f6fb51-60b9-4dcd-b79a-ebe933c83555\") " pod="openshift-console-operator/console-operator-67c89758df-jvrpf" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.678855 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q6ds\" (UniqueName: \"kubernetes.io/projected/7d4bb4e5-bb28-41fd-8095-0392fd6b8afb-kube-api-access-7q6ds\") pod \"openshift-controller-manager-operator-686468bdd5-h4bjc\" (UID: \"7d4bb4e5-bb28-41fd-8095-0392fd6b8afb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-h4bjc" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.678880 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/47512efb-ea0a-42ac-a2c6-fd3017df0ce1-audit-dir\") pod \"apiserver-8596bd845d-2glqc\" (UID: \"47512efb-ea0a-42ac-a2c6-fd3017df0ce1\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.678905 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f53b6ab7-e57d-4f85-adef-9a60515f8f1f-config-volume\") pod \"collect-profiles-29483145-q8nfm\" (UID: \"f53b6ab7-e57d-4f85-adef-9a60515f8f1f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-q8nfm" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.678928 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/bfe4596e-cffd-4e61-b095-455eea1ed712-machine-approver-tls\") pod \"machine-approver-54c688565-4dmmv\" (UID: \"bfe4596e-cffd-4e61-b095-455eea1ed712\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-4dmmv" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.678950 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/47512efb-ea0a-42ac-a2c6-fd3017df0ce1-encryption-config\") pod \"apiserver-8596bd845d-2glqc\" (UID: \"47512efb-ea0a-42ac-a2c6-fd3017df0ce1\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.678973 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fab52538-cd8b-408e-b571-f2dc516dc2a3-console-serving-cert\") pod \"console-64d44f6ddf-8nd58\" (UID: \"fab52538-cd8b-408e-b571-f2dc516dc2a3\") " pod="openshift-console/console-64d44f6ddf-8nd58" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.679021 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee0294ff-f61f-492b-b738-fbbee8f757eb-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-gngm4\" (UID: \"ee0294ff-f61f-492b-b738-fbbee8f757eb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-gngm4" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.679048 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/f4fe1ed8-46ec-4253-8371-144cad3c3573-srv-cert\") pod \"catalog-operator-75ff9f647d-rjq8j\" (UID: \"f4fe1ed8-46ec-4253-8371-144cad3c3573\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rjq8j" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.679072 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6169753a-a446-4d39-85c2-01422f667bde-apiservice-cert\") pod \"packageserver-7d4fc7d867-fs8fn\" (UID: \"6169753a-a446-4d39-85c2-01422f667bde\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fs8fn" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.679112 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3c70e39-bf38-42a7-b579-ed17a163a5b1-client-ca\") pod \"route-controller-manager-776cdc94d6-vgx98\" (UID: \"d3c70e39-bf38-42a7-b579-ed17a163a5b1\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.679135 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/986b7816-8325-48f6-b5a5-2d51c9f31687-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.679159 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5" Jan 21 
09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.679187 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ccf6a04-2820-4b99-9dbd-2e6d111b4fed-config\") pod \"machine-api-operator-755bb95488-6kqm2\" (UID: \"1ccf6a04-2820-4b99-9dbd-2e6d111b4fed\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6kqm2" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.679212 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f53b6ab7-e57d-4f85-adef-9a60515f8f1f-secret-volume\") pod \"collect-profiles-29483145-q8nfm\" (UID: \"f53b6ab7-e57d-4f85-adef-9a60515f8f1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-q8nfm" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.679234 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d29fd5f-ecd0-4624-97b8-5f2d50b70df0-kube-api-access\") pod \"kube-apiserver-operator-575994946d-lpk4t\" (UID: \"9d29fd5f-ecd0-4624-97b8-5f2d50b70df0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lpk4t" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.679265 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/315dcf5a-c0ec-4778-9118-2f68422fcc17-etcd-ca\") pod \"etcd-operator-69b85846b6-7nw9s\" (UID: \"315dcf5a-c0ec-4778-9118-2f68422fcc17\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7nw9s" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.679288 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47512efb-ea0a-42ac-a2c6-fd3017df0ce1-serving-cert\") pod \"apiserver-8596bd845d-2glqc\" (UID: 
\"47512efb-ea0a-42ac-a2c6-fd3017df0ce1\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.679309 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rkvv\" (UniqueName: \"kubernetes.io/projected/47512efb-ea0a-42ac-a2c6-fd3017df0ce1-kube-api-access-4rkvv\") pod \"apiserver-8596bd845d-2glqc\" (UID: \"47512efb-ea0a-42ac-a2c6-fd3017df0ce1\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.679331 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl8n4\" (UniqueName: \"kubernetes.io/projected/637fb734-7cb1-46f5-a282-438e701620d5-kube-api-access-fl8n4\") pod \"migrator-866fcbc849-j7g7z\" (UID: \"637fb734-7cb1-46f5-a282-438e701620d5\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-j7g7z" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.679342 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-registry-certificates\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.679366 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.679374 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/ee0294ff-f61f-492b-b738-fbbee8f757eb-config\") pod \"controller-manager-65b6cccf98-gngm4\" (UID: \"ee0294ff-f61f-492b-b738-fbbee8f757eb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-gngm4" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.679391 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36ec96a0-85cc-4757-ac20-cff015ffbe19-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-bkq66\" (UID: \"36ec96a0-85cc-4757-ac20-cff015ffbe19\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bkq66" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.679424 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/47512efb-ea0a-42ac-a2c6-fd3017df0ce1-etcd-serving-ca\") pod \"apiserver-8596bd845d-2glqc\" (UID: \"47512efb-ea0a-42ac-a2c6-fd3017df0ce1\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.679477 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4f6fb51-60b9-4dcd-b79a-ebe933c83555-config\") pod \"console-operator-67c89758df-jvrpf\" (UID: \"f4f6fb51-60b9-4dcd-b79a-ebe933c83555\") " pod="openshift-console-operator/console-operator-67c89758df-jvrpf" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.679514 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l68lw\" (UniqueName: \"kubernetes.io/projected/f4f6fb51-60b9-4dcd-b79a-ebe933c83555-kube-api-access-l68lw\") pod \"console-operator-67c89758df-jvrpf\" (UID: \"f4f6fb51-60b9-4dcd-b79a-ebe933c83555\") " 
pod="openshift-console-operator/console-operator-67c89758df-jvrpf" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.679561 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/986b7816-8325-48f6-b5a5-2d51c9f31687-node-pullsecrets\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.679595 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7cqg\" (UniqueName: \"kubernetes.io/projected/cec067a0-6e27-4e3f-b03a-f37ffd10dd43-kube-api-access-l7cqg\") pod \"control-plane-machine-set-operator-75ffdb6fcd-dc2k6\" (UID: \"cec067a0-6e27-4e3f-b03a-f37ffd10dd43\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dc2k6" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.679697 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee0294ff-f61f-492b-b738-fbbee8f757eb-serving-cert\") pod \"controller-manager-65b6cccf98-gngm4\" (UID: \"ee0294ff-f61f-492b-b738-fbbee8f757eb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-gngm4" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.679751 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-installation-pull-secrets\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.679778 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-bound-sa-token\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.679815 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fab52538-cd8b-408e-b571-f2dc516dc2a3-oauth-serving-cert\") pod \"console-64d44f6ddf-8nd58\" (UID: \"fab52538-cd8b-408e-b571-f2dc516dc2a3\") " pod="openshift-console/console-64d44f6ddf-8nd58" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.679832 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/47512efb-ea0a-42ac-a2c6-fd3017df0ce1-audit-policies\") pod \"apiserver-8596bd845d-2glqc\" (UID: \"47512efb-ea0a-42ac-a2c6-fd3017df0ce1\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.679857 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9pbz\" (UniqueName: \"kubernetes.io/projected/197acdb1-438c-41ba-8b8d-a78197486cd7-kube-api-access-q9pbz\") pod \"package-server-manager-77f986bd66-5b4c8\" (UID: \"197acdb1-438c-41ba-8b8d-a78197486cd7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5b4c8" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.679900 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1ccf6a04-2820-4b99-9dbd-2e6d111b4fed-images\") pod \"machine-api-operator-755bb95488-6kqm2\" (UID: \"1ccf6a04-2820-4b99-9dbd-2e6d111b4fed\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6kqm2" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 
09:56:34.679917 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8053129-cc10-477e-b44f-52c846d9d1ce-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-v7d7l\" (UID: \"a8053129-cc10-477e-b44f-52c846d9d1ce\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v7d7l" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.680662 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/986b7816-8325-48f6-b5a5-2d51c9f31687-serving-cert\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.680693 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65fa8c5a-91c4-411a-9586-2f893dfda634-config\") pod \"authentication-operator-7f5c659b84-6z5rg\" (UID: \"65fa8c5a-91c4-411a-9586-2f893dfda634\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6z5rg" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.680724 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09592b3a-cb47-43ee-97e7-f058888af3ff-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-hdhzp\" (UID: \"09592b3a-cb47-43ee-97e7-f058888af3ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hdhzp" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.680749 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f4fe1ed8-46ec-4253-8371-144cad3c3573-tmpfs\") pod 
\"catalog-operator-75ff9f647d-rjq8j\" (UID: \"f4fe1ed8-46ec-4253-8371-144cad3c3573\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rjq8j" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.680772 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d4bb4e5-bb28-41fd-8095-0392fd6b8afb-config\") pod \"openshift-controller-manager-operator-686468bdd5-h4bjc\" (UID: \"7d4bb4e5-bb28-41fd-8095-0392fd6b8afb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-h4bjc" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.680795 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/197acdb1-438c-41ba-8b8d-a78197486cd7-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-5b4c8\" (UID: \"197acdb1-438c-41ba-8b8d-a78197486cd7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5b4c8" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.680850 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/986b7816-8325-48f6-b5a5-2d51c9f31687-image-import-ca\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.680874 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b2qp\" (UniqueName: \"kubernetes.io/projected/b88913b0-a37a-46b4-9c43-4e2e22f306d5-kube-api-access-4b2qp\") pod \"machine-config-controller-f9cdd68f7-nwbtj\" (UID: \"b88913b0-a37a-46b4-9c43-4e2e22f306d5\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nwbtj" 
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.680897 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8053129-cc10-477e-b44f-52c846d9d1ce-config\") pod \"kube-controller-manager-operator-69d5f845f8-v7d7l\" (UID: \"a8053129-cc10-477e-b44f-52c846d9d1ce\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v7d7l" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.680923 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49421ea1-6e3f-41b1-b0d6-e821cac2f8ab-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-pxxr9\" (UID: \"49421ea1-6e3f-41b1-b0d6-e821cac2f8ab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-pxxr9" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.680947 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fab52538-cd8b-408e-b571-f2dc516dc2a3-console-oauth-config\") pod \"console-64d44f6ddf-8nd58\" (UID: \"fab52538-cd8b-408e-b571-f2dc516dc2a3\") " pod="openshift-console/console-64d44f6ddf-8nd58" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.680968 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee0294ff-f61f-492b-b738-fbbee8f757eb-client-ca\") pod \"controller-manager-65b6cccf98-gngm4\" (UID: \"ee0294ff-f61f-492b-b738-fbbee8f757eb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-gngm4" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.680989 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/986b7816-8325-48f6-b5a5-2d51c9f31687-trusted-ca-bundle\") pod 
\"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.681013 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9202c0b0-32fd-49a9-85ce-98c79744bfcf-available-featuregates\") pod \"openshift-config-operator-5777786469-dpf2h\" (UID: \"9202c0b0-32fd-49a9-85ce-98c79744bfcf\") " pod="openshift-config-operator/openshift-config-operator-5777786469-dpf2h" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.681035 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d4bb4e5-bb28-41fd-8095-0392fd6b8afb-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-h4bjc\" (UID: \"7d4bb4e5-bb28-41fd-8095-0392fd6b8afb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-h4bjc" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.681056 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.681077 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82cwv\" (UniqueName: \"kubernetes.io/projected/a1c5db5b-e8c1-4d79-aca9-10703c8e82db-kube-api-access-82cwv\") pod \"service-ca-74545575db-6jw94\" (UID: \"a1c5db5b-e8c1-4d79-aca9-10703c8e82db\") " pod="openshift-service-ca/service-ca-74545575db-6jw94" Jan 21 09:56:34 crc kubenswrapper[5119]: 
I0121 09:56:34.681096 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/47512efb-ea0a-42ac-a2c6-fd3017df0ce1-etcd-client\") pod \"apiserver-8596bd845d-2glqc\" (UID: \"47512efb-ea0a-42ac-a2c6-fd3017df0ce1\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.681120 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/315dcf5a-c0ec-4778-9118-2f68422fcc17-serving-cert\") pod \"etcd-operator-69b85846b6-7nw9s\" (UID: \"315dcf5a-c0ec-4778-9118-2f68422fcc17\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7nw9s" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.681142 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n9pql\" (UniqueName: \"kubernetes.io/projected/49421ea1-6e3f-41b1-b0d6-e821cac2f8ab-kube-api-access-n9pql\") pod \"openshift-apiserver-operator-846cbfc458-pxxr9\" (UID: \"49421ea1-6e3f-41b1-b0d6-e821cac2f8ab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-pxxr9" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.681207 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1ccf6a04-2820-4b99-9dbd-2e6d111b4fed-images\") pod \"machine-api-operator-755bb95488-6kqm2\" (UID: \"1ccf6a04-2820-4b99-9dbd-2e6d111b4fed\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6kqm2" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.681846 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/986b7816-8325-48f6-b5a5-2d51c9f31687-etcd-client\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " 
pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.681972 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ccf6a04-2820-4b99-9dbd-2e6d111b4fed-config\") pod \"machine-api-operator-755bb95488-6kqm2\" (UID: \"1ccf6a04-2820-4b99-9dbd-2e6d111b4fed\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6kqm2" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.681975 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/986b7816-8325-48f6-b5a5-2d51c9f31687-encryption-config\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.682227 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1ccf6a04-2820-4b99-9dbd-2e6d111b4fed-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-6kqm2\" (UID: \"1ccf6a04-2820-4b99-9dbd-2e6d111b4fed\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6kqm2" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.682768 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9202c0b0-32fd-49a9-85ce-98c79744bfcf-available-featuregates\") pod \"openshift-config-operator-5777786469-dpf2h\" (UID: \"9202c0b0-32fd-49a9-85ce-98c79744bfcf\") " pod="openshift-config-operator/openshift-config-operator-5777786469-dpf2h" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.682779 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/19471fb3-19b2-42d4-967e-6b0620f686ce-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-x6xfs\" (UID: 
\"19471fb3-19b2-42d4-967e-6b0620f686ce\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-x6xfs"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.683211 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee0294ff-f61f-492b-b738-fbbee8f757eb-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-gngm4\" (UID: \"ee0294ff-f61f-492b-b738-fbbee8f757eb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-gngm4"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.683284 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3c70e39-bf38-42a7-b579-ed17a163a5b1-client-ca\") pod \"route-controller-manager-776cdc94d6-vgx98\" (UID: \"d3c70e39-bf38-42a7-b579-ed17a163a5b1\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.683335 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s8qvc\" (UniqueName: \"kubernetes.io/projected/f53b6ab7-e57d-4f85-adef-9a60515f8f1f-kube-api-access-s8qvc\") pod \"collect-profiles-29483145-q8nfm\" (UID: \"f53b6ab7-e57d-4f85-adef-9a60515f8f1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-q8nfm"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.683374 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-registry-tls\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.683394 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65fa8c5a-91c4-411a-9586-2f893dfda634-serving-cert\") pod \"authentication-operator-7f5c659b84-6z5rg\" (UID: \"65fa8c5a-91c4-411a-9586-2f893dfda634\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6z5rg"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.683413 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chpll\" (UniqueName: \"kubernetes.io/projected/6169753a-a446-4d39-85c2-01422f667bde-kube-api-access-chpll\") pod \"packageserver-7d4fc7d867-fs8fn\" (UID: \"6169753a-a446-4d39-85c2-01422f667bde\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fs8fn"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.683432 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdjfz\" (UniqueName: \"kubernetes.io/projected/333793e1-92de-4fbe-83b5-26b64848c6af-kube-api-access-fdjfz\") pod \"router-default-68cf44c8b8-cn8mz\" (UID: \"333793e1-92de-4fbe-83b5-26b64848c6af\") " pod="openshift-ingress/router-default-68cf44c8b8-cn8mz"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.683459 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5fzp5\" (UniqueName: \"kubernetes.io/projected/bfe4596e-cffd-4e61-b095-455eea1ed712-kube-api-access-5fzp5\") pod \"machine-approver-54c688565-4dmmv\" (UID: \"bfe4596e-cffd-4e61-b095-455eea1ed712\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-4dmmv"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.683464 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/986b7816-8325-48f6-b5a5-2d51c9f31687-image-import-ca\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.683505 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/09592b3a-cb47-43ee-97e7-f058888af3ff-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-hdhzp\" (UID: \"09592b3a-cb47-43ee-97e7-f058888af3ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hdhzp"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.683568 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mmb6\" (UniqueName: \"kubernetes.io/projected/984cb670-8e15-4092-bf42-f3c6337e1cad-kube-api-access-7mmb6\") pod \"kube-storage-version-migrator-operator-565b79b866-4rx9w\" (UID: \"984cb670-8e15-4092-bf42-f3c6337e1cad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4rx9w"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.683222 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f53b6ab7-e57d-4f85-adef-9a60515f8f1f-config-volume\") pod \"collect-profiles-29483145-q8nfm\" (UID: \"f53b6ab7-e57d-4f85-adef-9a60515f8f1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-q8nfm"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.683964 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/986b7816-8325-48f6-b5a5-2d51c9f31687-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.684475 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/bfe4596e-cffd-4e61-b095-455eea1ed712-machine-approver-tls\") pod \"machine-approver-54c688565-4dmmv\" (UID: \"bfe4596e-cffd-4e61-b095-455eea1ed712\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-4dmmv"
Jan 21 09:56:34 crc kubenswrapper[5119]: E0121 09:56:34.684702 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:35.184688343 +0000 UTC m=+110.852780021 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.684982 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9202c0b0-32fd-49a9-85ce-98c79744bfcf-serving-cert\") pod \"openshift-config-operator-5777786469-dpf2h\" (UID: \"9202c0b0-32fd-49a9-85ce-98c79744bfcf\") " pod="openshift-config-operator/openshift-config-operator-5777786469-dpf2h"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.685210 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8czzd\" (UniqueName: \"kubernetes.io/projected/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-kube-api-access-8czzd\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.685256 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/984cb670-8e15-4092-bf42-f3c6337e1cad-config\") pod \"kube-storage-version-migrator-operator-565b79b866-4rx9w\" (UID: \"984cb670-8e15-4092-bf42-f3c6337e1cad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4rx9w"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.685275 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/333793e1-92de-4fbe-83b5-26b64848c6af-metrics-certs\") pod \"router-default-68cf44c8b8-cn8mz\" (UID: \"333793e1-92de-4fbe-83b5-26b64848c6af\") " pod="openshift-ingress/router-default-68cf44c8b8-cn8mz"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.685293 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8053129-cc10-477e-b44f-52c846d9d1ce-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-v7d7l\" (UID: \"a8053129-cc10-477e-b44f-52c846d9d1ce\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v7d7l"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.685318 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ee0294ff-f61f-492b-b738-fbbee8f757eb-tmp\") pod \"controller-manager-65b6cccf98-gngm4\" (UID: \"ee0294ff-f61f-492b-b738-fbbee8f757eb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-gngm4"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.685363 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xvrkq\" (UniqueName: \"kubernetes.io/projected/d3c70e39-bf38-42a7-b579-ed17a163a5b1-kube-api-access-xvrkq\") pod \"route-controller-manager-776cdc94d6-vgx98\" (UID: \"d3c70e39-bf38-42a7-b579-ed17a163a5b1\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.685383 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cec067a0-6e27-4e3f-b03a-f37ffd10dd43-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-dc2k6\" (UID: \"cec067a0-6e27-4e3f-b03a-f37ffd10dd43\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dc2k6"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.685766 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee0294ff-f61f-492b-b738-fbbee8f757eb-serving-cert\") pod \"controller-manager-65b6cccf98-gngm4\" (UID: \"ee0294ff-f61f-492b-b738-fbbee8f757eb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-gngm4"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.686324 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhn8q\" (UniqueName: \"kubernetes.io/projected/4c82a029-666a-49b5-8c4c-e8956a23303a-kube-api-access-lhn8q\") pod \"service-ca-operator-5b9c976747-vkvvq\" (UID: \"4c82a029-666a-49b5-8c4c-e8956a23303a\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-vkvvq"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.686337 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ee0294ff-f61f-492b-b738-fbbee8f757eb-tmp\") pod \"controller-manager-65b6cccf98-gngm4\" (UID: \"ee0294ff-f61f-492b-b738-fbbee8f757eb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-gngm4"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.686392 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-clhbh\" (UniqueName: \"kubernetes.io/projected/ee0294ff-f61f-492b-b738-fbbee8f757eb-kube-api-access-clhbh\") pod \"controller-manager-65b6cccf98-gngm4\" (UID: \"ee0294ff-f61f-492b-b738-fbbee8f757eb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-gngm4"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.686431 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09592b3a-cb47-43ee-97e7-f058888af3ff-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-hdhzp\" (UID: \"09592b3a-cb47-43ee-97e7-f058888af3ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hdhzp"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.686453 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g92b5\" (UniqueName: \"kubernetes.io/projected/315dcf5a-c0ec-4778-9118-2f68422fcc17-kube-api-access-g92b5\") pod \"etcd-operator-69b85846b6-7nw9s\" (UID: \"315dcf5a-c0ec-4778-9118-2f68422fcc17\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7nw9s"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.686477 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/19471fb3-19b2-42d4-967e-6b0620f686ce-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-x6xfs\" (UID: \"19471fb3-19b2-42d4-967e-6b0620f686ce\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-x6xfs"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.686521 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a8053129-cc10-477e-b44f-52c846d9d1ce-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-v7d7l\" (UID: \"a8053129-cc10-477e-b44f-52c846d9d1ce\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v7d7l"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.686890 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b88913b0-a37a-46b4-9c43-4e2e22f306d5-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-nwbtj\" (UID: \"b88913b0-a37a-46b4-9c43-4e2e22f306d5\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nwbtj"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.686917 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.686965 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c82a029-666a-49b5-8c4c-e8956a23303a-config\") pod \"service-ca-operator-5b9c976747-vkvvq\" (UID: \"4c82a029-666a-49b5-8c4c-e8956a23303a\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-vkvvq"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.687044 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65fa8c5a-91c4-411a-9586-2f893dfda634-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-6z5rg\" (UID: \"65fa8c5a-91c4-411a-9586-2f893dfda634\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6z5rg"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.687073 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/984cb670-8e15-4092-bf42-f3c6337e1cad-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-4rx9w\" (UID: \"984cb670-8e15-4092-bf42-f3c6337e1cad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4rx9w"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.687109 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.687132 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d29fd5f-ecd0-4624-97b8-5f2d50b70df0-config\") pod \"kube-apiserver-operator-575994946d-lpk4t\" (UID: \"9d29fd5f-ecd0-4624-97b8-5f2d50b70df0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lpk4t"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.687251 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-installation-pull-secrets\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.687266 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-registry-tls\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.687814 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/986b7816-8325-48f6-b5a5-2d51c9f31687-serving-cert\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.687853 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/19471fb3-19b2-42d4-967e-6b0620f686ce-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-x6xfs\" (UID: \"19471fb3-19b2-42d4-967e-6b0620f686ce\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-x6xfs"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.688100 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/986b7816-8325-48f6-b5a5-2d51c9f31687-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.688174 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/19471fb3-19b2-42d4-967e-6b0620f686ce-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-x6xfs\" (UID: \"19471fb3-19b2-42d4-967e-6b0620f686ce\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-x6xfs"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.688229 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/333793e1-92de-4fbe-83b5-26b64848c6af-default-certificate\") pod \"router-default-68cf44c8b8-cn8mz\" (UID: \"333793e1-92de-4fbe-83b5-26b64848c6af\") " pod="openshift-ingress/router-default-68cf44c8b8-cn8mz"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.688382 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3c70e39-bf38-42a7-b579-ed17a163a5b1-serving-cert\") pod \"route-controller-manager-776cdc94d6-vgx98\" (UID: \"d3c70e39-bf38-42a7-b579-ed17a163a5b1\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.688438 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.688460 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.688391 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee0294ff-f61f-492b-b738-fbbee8f757eb-client-ca\") pod \"controller-manager-65b6cccf98-gngm4\" (UID: \"ee0294ff-f61f-492b-b738-fbbee8f757eb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-gngm4"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.688485 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3c70e39-bf38-42a7-b579-ed17a163a5b1-config\") pod \"route-controller-manager-776cdc94d6-vgx98\" (UID: \"d3c70e39-bf38-42a7-b579-ed17a163a5b1\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.688524 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.688542 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.688576 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36ec96a0-85cc-4757-ac20-cff015ffbe19-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-bkq66\" (UID: \"36ec96a0-85cc-4757-ac20-cff015ffbe19\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bkq66"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.688597 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4pnb\" (UniqueName: \"kubernetes.io/projected/36ec96a0-85cc-4757-ac20-cff015ffbe19-kube-api-access-d4pnb\") pod \"cluster-image-registry-operator-86c45576b9-bkq66\" (UID: \"36ec96a0-85cc-4757-ac20-cff015ffbe19\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bkq66"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.688627 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-trusted-ca\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.688657 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d758cf9c-d67a-46df-a626-14e4a6a92be8-images\") pod \"machine-config-operator-67c9d58cbb-mgn7f\" (UID: \"d758cf9c-d67a-46df-a626-14e4a6a92be8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mgn7f"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.688673 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d758cf9c-d67a-46df-a626-14e4a6a92be8-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-mgn7f\" (UID: \"d758cf9c-d67a-46df-a626-14e4a6a92be8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mgn7f"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.688701 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3c70e39-bf38-42a7-b579-ed17a163a5b1-tmp\") pod \"route-controller-manager-776cdc94d6-vgx98\" (UID: \"d3c70e39-bf38-42a7-b579-ed17a163a5b1\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.688720 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hkmpg\" (UniqueName: \"kubernetes.io/projected/65fa8c5a-91c4-411a-9586-2f893dfda634-kube-api-access-hkmpg\") pod \"authentication-operator-7f5c659b84-6z5rg\" (UID: \"65fa8c5a-91c4-411a-9586-2f893dfda634\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6z5rg"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.688736 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/333793e1-92de-4fbe-83b5-26b64848c6af-stats-auth\") pod \"router-default-68cf44c8b8-cn8mz\" (UID: \"333793e1-92de-4fbe-83b5-26b64848c6af\") " pod="openshift-ingress/router-default-68cf44c8b8-cn8mz"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.689228 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f53b6ab7-e57d-4f85-adef-9a60515f8f1f-secret-volume\") pod \"collect-profiles-29483145-q8nfm\" (UID: \"f53b6ab7-e57d-4f85-adef-9a60515f8f1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-q8nfm"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.689548 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49421ea1-6e3f-41b1-b0d6-e821cac2f8ab-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-pxxr9\" (UID: \"49421ea1-6e3f-41b1-b0d6-e821cac2f8ab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-pxxr9"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.690108 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3c70e39-bf38-42a7-b579-ed17a163a5b1-config\") pod \"route-controller-manager-776cdc94d6-vgx98\" (UID: \"d3c70e39-bf38-42a7-b579-ed17a163a5b1\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.690204 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3c70e39-bf38-42a7-b579-ed17a163a5b1-tmp\") pod \"route-controller-manager-776cdc94d6-vgx98\" (UID: \"d3c70e39-bf38-42a7-b579-ed17a163a5b1\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.690217 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-trusted-ca\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.692620 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.694728 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3c70e39-bf38-42a7-b579-ed17a163a5b1-serving-cert\") pod \"route-controller-manager-776cdc94d6-vgx98\" (UID: \"d3c70e39-bf38-42a7-b579-ed17a163a5b1\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.711723 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.733097 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.752024 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.771888 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.790075 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.790268 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/333793e1-92de-4fbe-83b5-26b64848c6af-default-certificate\") pod \"router-default-68cf44c8b8-cn8mz\" (UID: \"333793e1-92de-4fbe-83b5-26b64848c6af\") " pod="openshift-ingress/router-default-68cf44c8b8-cn8mz"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.790298 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.790314 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.790329 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.790508 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.790554 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36ec96a0-85cc-4757-ac20-cff015ffbe19-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-bkq66\" (UID: \"36ec96a0-85cc-4757-ac20-cff015ffbe19\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bkq66"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.790575 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d4pnb\" (UniqueName: \"kubernetes.io/projected/36ec96a0-85cc-4757-ac20-cff015ffbe19-kube-api-access-d4pnb\") pod \"cluster-image-registry-operator-86c45576b9-bkq66\" (UID: \"36ec96a0-85cc-4757-ac20-cff015ffbe19\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bkq66"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.790692 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d758cf9c-d67a-46df-a626-14e4a6a92be8-images\") pod \"machine-config-operator-67c9d58cbb-mgn7f\" (UID: \"d758cf9c-d67a-46df-a626-14e4a6a92be8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mgn7f"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.790733 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d758cf9c-d67a-46df-a626-14e4a6a92be8-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-mgn7f\" (UID: \"d758cf9c-d67a-46df-a626-14e4a6a92be8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mgn7f"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.790753 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/333793e1-92de-4fbe-83b5-26b64848c6af-stats-auth\") pod \"router-default-68cf44c8b8-cn8mz\" (UID: \"333793e1-92de-4fbe-83b5-26b64848c6af\") " pod="openshift-ingress/router-default-68cf44c8b8-cn8mz"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.790771 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f4fe1ed8-46ec-4253-8371-144cad3c3573-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-rjq8j\" (UID: \"f4fe1ed8-46ec-4253-8371-144cad3c3573\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rjq8j"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.790793 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7d4bb4e5-bb28-41fd-8095-0392fd6b8afb-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-h4bjc\" (UID: \"7d4bb4e5-bb28-41fd-8095-0392fd6b8afb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-h4bjc"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.790809 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/36ec96a0-85cc-4757-ac20-cff015ffbe19-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-bkq66\" (UID: \"36ec96a0-85cc-4757-ac20-cff015ffbe19\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bkq66"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.790824 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/333793e1-92de-4fbe-83b5-26b64848c6af-service-ca-bundle\") pod \"router-default-68cf44c8b8-cn8mz\" (UID: \"333793e1-92de-4fbe-83b5-26b64848c6af\") " pod="openshift-ingress/router-default-68cf44c8b8-cn8mz"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.790840 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47512efb-ea0a-42ac-a2c6-fd3017df0ce1-trusted-ca-bundle\") pod \"apiserver-8596bd845d-2glqc\" (UID: \"47512efb-ea0a-42ac-a2c6-fd3017df0ce1\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.790860 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09592b3a-cb47-43ee-97e7-f058888af3ff-config\") pod \"openshift-kube-scheduler-operator-54f497555d-hdhzp\" (UID: \"09592b3a-cb47-43ee-97e7-f058888af3ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hdhzp"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.790878 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/315dcf5a-c0ec-4778-9118-2f68422fcc17-etcd-client\") pod \"etcd-operator-69b85846b6-7nw9s\" (UID: \"315dcf5a-c0ec-4778-9118-2f68422fcc17\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7nw9s"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.790904 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6169753a-a446-4d39-85c2-01422f667bde-tmpfs\") pod \"packageserver-7d4fc7d867-fs8fn\" (UID: \"6169753a-a446-4d39-85c2-01422f667bde\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fs8fn"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.790922 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a1c5db5b-e8c1-4d79-aca9-10703c8e82db-signing-cabundle\") pod \"service-ca-74545575db-6jw94\" (UID: \"a1c5db5b-e8c1-4d79-aca9-10703c8e82db\") " pod="openshift-service-ca/service-ca-74545575db-6jw94"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.790942 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b88913b0-a37a-46b4-9c43-4e2e22f306d5-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-nwbtj\" (UID: \"b88913b0-a37a-46b4-9c43-4e2e22f306d5\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nwbtj"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.790960 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5jx77\" (UniqueName: \"kubernetes.io/projected/fab52538-cd8b-408e-b571-f2dc516dc2a3-kube-api-access-5jx77\") pod \"console-64d44f6ddf-8nd58\" (UID: \"fab52538-cd8b-408e-b571-f2dc516dc2a3\") " pod="openshift-console/console-64d44f6ddf-8nd58"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.790975 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c82a029-666a-49b5-8c4c-e8956a23303a-serving-cert\") pod \"service-ca-operator-5b9c976747-vkvvq\" (UID: \"4c82a029-666a-49b5-8c4c-e8956a23303a\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-vkvvq"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.790993 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4f6fb51-60b9-4dcd-b79a-ebe933c83555-serving-cert\") pod \"console-operator-67c89758df-jvrpf\" (UID: \"f4f6fb51-60b9-4dcd-b79a-ebe933c83555\") " pod="openshift-console-operator/console-operator-67c89758df-jvrpf"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.791016 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fab52538-cd8b-408e-b571-f2dc516dc2a3-trusted-ca-bundle\") pod \"console-64d44f6ddf-8nd58\" (UID: \"fab52538-cd8b-408e-b571-f2dc516dc2a3\") " pod="openshift-console/console-64d44f6ddf-8nd58"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.791037 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qhvsd\" (UniqueName: \"kubernetes.io/projected/d758cf9c-d67a-46df-a626-14e4a6a92be8-kube-api-access-qhvsd\") pod \"machine-config-operator-67c9d58cbb-mgn7f\" (UID: \"d758cf9c-d67a-46df-a626-14e4a6a92be8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mgn7f"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.791053 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/36ec96a0-85cc-4757-ac20-cff015ffbe19-tmp\") pod \"cluster-image-registry-operator-86c45576b9-bkq66\" (UID: \"36ec96a0-85cc-4757-ac20-cff015ffbe19\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bkq66" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.791071 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/315dcf5a-c0ec-4778-9118-2f68422fcc17-config\") pod \"etcd-operator-69b85846b6-7nw9s\" (UID: \"315dcf5a-c0ec-4778-9118-2f68422fcc17\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7nw9s" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.791089 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fab52538-cd8b-408e-b571-f2dc516dc2a3-console-config\") pod \"console-64d44f6ddf-8nd58\" (UID: \"fab52538-cd8b-408e-b571-f2dc516dc2a3\") " pod="openshift-console/console-64d44f6ddf-8nd58" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.791107 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fab52538-cd8b-408e-b571-f2dc516dc2a3-service-ca\") pod \"console-64d44f6ddf-8nd58\" (UID: \"fab52538-cd8b-408e-b571-f2dc516dc2a3\") " pod="openshift-console/console-64d44f6ddf-8nd58" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.791125 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9d29fd5f-ecd0-4624-97b8-5f2d50b70df0-tmp-dir\") pod \"kube-apiserver-operator-575994946d-lpk4t\" (UID: \"9d29fd5f-ecd0-4624-97b8-5f2d50b70df0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lpk4t" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.791145 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a1c5db5b-e8c1-4d79-aca9-10703c8e82db-signing-key\") pod \"service-ca-74545575db-6jw94\" (UID: \"a1c5db5b-e8c1-4d79-aca9-10703c8e82db\") " 
pod="openshift-service-ca/service-ca-74545575db-6jw94"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.791174 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/36ec96a0-85cc-4757-ac20-cff015ffbe19-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-bkq66\" (UID: \"36ec96a0-85cc-4757-ac20-cff015ffbe19\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bkq66"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.791193 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6169753a-a446-4d39-85c2-01422f667bde-webhook-cert\") pod \"packageserver-7d4fc7d867-fs8fn\" (UID: \"6169753a-a446-4d39-85c2-01422f667bde\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fs8fn"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.791210 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-audit-dir\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.791228 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9wvbg\" (UniqueName: \"kubernetes.io/projected/d91bd19e-1fee-475c-a8ff-ee1014086695-kube-api-access-9wvbg\") pod \"downloads-747b44746d-prvwt\" (UID: \"d91bd19e-1fee-475c-a8ff-ee1014086695\") " pod="openshift-console/downloads-747b44746d-prvwt"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.791243 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/315dcf5a-c0ec-4778-9118-2f68422fcc17-tmp-dir\") pod \"etcd-operator-69b85846b6-7nw9s\" (UID: \"315dcf5a-c0ec-4778-9118-2f68422fcc17\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7nw9s"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.791261 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-audit-policies\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.791278 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.791297 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d758cf9c-d67a-46df-a626-14e4a6a92be8-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-mgn7f\" (UID: \"d758cf9c-d67a-46df-a626-14e4a6a92be8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mgn7f"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.791317 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.791333 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.791353 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/315dcf5a-c0ec-4778-9118-2f68422fcc17-etcd-service-ca\") pod \"etcd-operator-69b85846b6-7nw9s\" (UID: \"315dcf5a-c0ec-4778-9118-2f68422fcc17\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7nw9s"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.791369 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d29fd5f-ecd0-4624-97b8-5f2d50b70df0-serving-cert\") pod \"kube-apiserver-operator-575994946d-lpk4t\" (UID: \"9d29fd5f-ecd0-4624-97b8-5f2d50b70df0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lpk4t"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.791387 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cqdlz\" (UniqueName: \"kubernetes.io/projected/f4fe1ed8-46ec-4253-8371-144cad3c3573-kube-api-access-cqdlz\") pod \"catalog-operator-75ff9f647d-rjq8j\" (UID: \"f4fe1ed8-46ec-4253-8371-144cad3c3573\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rjq8j"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.791403 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f4f6fb51-60b9-4dcd-b79a-ebe933c83555-trusted-ca\") pod \"console-operator-67c89758df-jvrpf\" (UID: \"f4f6fb51-60b9-4dcd-b79a-ebe933c83555\") " pod="openshift-console-operator/console-operator-67c89758df-jvrpf"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.791413 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.791434 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d758cf9c-d67a-46df-a626-14e4a6a92be8-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-mgn7f\" (UID: \"d758cf9c-d67a-46df-a626-14e4a6a92be8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mgn7f"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.791421 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7q6ds\" (UniqueName: \"kubernetes.io/projected/7d4bb4e5-bb28-41fd-8095-0392fd6b8afb-kube-api-access-7q6ds\") pod \"openshift-controller-manager-operator-686468bdd5-h4bjc\" (UID: \"7d4bb4e5-bb28-41fd-8095-0392fd6b8afb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-h4bjc"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.791698 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36ec96a0-85cc-4757-ac20-cff015ffbe19-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-bkq66\" (UID: \"36ec96a0-85cc-4757-ac20-cff015ffbe19\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bkq66"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.791806 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9d29fd5f-ecd0-4624-97b8-5f2d50b70df0-tmp-dir\") pod \"kube-apiserver-operator-575994946d-lpk4t\" (UID: \"9d29fd5f-ecd0-4624-97b8-5f2d50b70df0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lpk4t"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.791974 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7d4bb4e5-bb28-41fd-8095-0392fd6b8afb-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-h4bjc\" (UID: \"7d4bb4e5-bb28-41fd-8095-0392fd6b8afb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-h4bjc"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.792192 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fab52538-cd8b-408e-b571-f2dc516dc2a3-console-config\") pod \"console-64d44f6ddf-8nd58\" (UID: \"fab52538-cd8b-408e-b571-f2dc516dc2a3\") " pod="openshift-console/console-64d44f6ddf-8nd58"
Jan 21 09:56:34 crc kubenswrapper[5119]: E0121 09:56:34.792352 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:35.292324349 +0000 UTC m=+110.960416027 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.792468 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/47512efb-ea0a-42ac-a2c6-fd3017df0ce1-audit-dir\") pod \"apiserver-8596bd845d-2glqc\" (UID: \"47512efb-ea0a-42ac-a2c6-fd3017df0ce1\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.792511 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/47512efb-ea0a-42ac-a2c6-fd3017df0ce1-encryption-config\") pod \"apiserver-8596bd845d-2glqc\" (UID: \"47512efb-ea0a-42ac-a2c6-fd3017df0ce1\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.792540 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fab52538-cd8b-408e-b571-f2dc516dc2a3-console-serving-cert\") pod \"console-64d44f6ddf-8nd58\" (UID: \"fab52538-cd8b-408e-b571-f2dc516dc2a3\") " pod="openshift-console/console-64d44f6ddf-8nd58"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.792563 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f4fe1ed8-46ec-4253-8371-144cad3c3573-srv-cert\") pod \"catalog-operator-75ff9f647d-rjq8j\" (UID: \"f4fe1ed8-46ec-4253-8371-144cad3c3573\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rjq8j"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.792573 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/36ec96a0-85cc-4757-ac20-cff015ffbe19-tmp\") pod \"cluster-image-registry-operator-86c45576b9-bkq66\" (UID: \"36ec96a0-85cc-4757-ac20-cff015ffbe19\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bkq66"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.792580 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6169753a-a446-4d39-85c2-01422f667bde-apiservice-cert\") pod \"packageserver-7d4fc7d867-fs8fn\" (UID: \"6169753a-a446-4d39-85c2-01422f667bde\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fs8fn"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.792223 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/36ec96a0-85cc-4757-ac20-cff015ffbe19-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-bkq66\" (UID: \"36ec96a0-85cc-4757-ac20-cff015ffbe19\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bkq66"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.792760 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.792790 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d29fd5f-ecd0-4624-97b8-5f2d50b70df0-kube-api-access\") pod \"kube-apiserver-operator-575994946d-lpk4t\" (UID: \"9d29fd5f-ecd0-4624-97b8-5f2d50b70df0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lpk4t"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.792836 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/315dcf5a-c0ec-4778-9118-2f68422fcc17-etcd-ca\") pod \"etcd-operator-69b85846b6-7nw9s\" (UID: \"315dcf5a-c0ec-4778-9118-2f68422fcc17\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7nw9s"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.792853 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47512efb-ea0a-42ac-a2c6-fd3017df0ce1-serving-cert\") pod \"apiserver-8596bd845d-2glqc\" (UID: \"47512efb-ea0a-42ac-a2c6-fd3017df0ce1\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.792869 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4rkvv\" (UniqueName: \"kubernetes.io/projected/47512efb-ea0a-42ac-a2c6-fd3017df0ce1-kube-api-access-4rkvv\") pod \"apiserver-8596bd845d-2glqc\" (UID: \"47512efb-ea0a-42ac-a2c6-fd3017df0ce1\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.792885 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fl8n4\" (UniqueName: \"kubernetes.io/projected/637fb734-7cb1-46f5-a282-438e701620d5-kube-api-access-fl8n4\") pod \"migrator-866fcbc849-j7g7z\" (UID: \"637fb734-7cb1-46f5-a282-438e701620d5\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-j7g7z"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.792907 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.792924 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36ec96a0-85cc-4757-ac20-cff015ffbe19-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-bkq66\" (UID: \"36ec96a0-85cc-4757-ac20-cff015ffbe19\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bkq66"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.792939 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/47512efb-ea0a-42ac-a2c6-fd3017df0ce1-etcd-serving-ca\") pod \"apiserver-8596bd845d-2glqc\" (UID: \"47512efb-ea0a-42ac-a2c6-fd3017df0ce1\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.792954 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4f6fb51-60b9-4dcd-b79a-ebe933c83555-config\") pod \"console-operator-67c89758df-jvrpf\" (UID: \"f4f6fb51-60b9-4dcd-b79a-ebe933c83555\") " pod="openshift-console-operator/console-operator-67c89758df-jvrpf"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.792970 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l68lw\" (UniqueName: \"kubernetes.io/projected/f4f6fb51-60b9-4dcd-b79a-ebe933c83555-kube-api-access-l68lw\") pod \"console-operator-67c89758df-jvrpf\" (UID: \"f4f6fb51-60b9-4dcd-b79a-ebe933c83555\") " pod="openshift-console-operator/console-operator-67c89758df-jvrpf"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.793249 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/333793e1-92de-4fbe-83b5-26b64848c6af-service-ca-bundle\") pod \"router-default-68cf44c8b8-cn8mz\" (UID: \"333793e1-92de-4fbe-83b5-26b64848c6af\") " pod="openshift-ingress/router-default-68cf44c8b8-cn8mz"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.793401 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/315dcf5a-c0ec-4778-9118-2f68422fcc17-config\") pod \"etcd-operator-69b85846b6-7nw9s\" (UID: \"315dcf5a-c0ec-4778-9118-2f68422fcc17\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7nw9s"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.793717 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.794210 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.794218 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6169753a-a446-4d39-85c2-01422f667bde-tmpfs\") pod (UID: 
\"6169753a-a446-4d39-85c2-01422f667bde\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fs8fn"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.794452 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/315dcf5a-c0ec-4778-9118-2f68422fcc17-etcd-service-ca\") pod \"etcd-operator-69b85846b6-7nw9s\" (UID: \"315dcf5a-c0ec-4778-9118-2f68422fcc17\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7nw9s"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.795156 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.795310 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/315dcf5a-c0ec-4778-9118-2f68422fcc17-tmp-dir\") pod \"etcd-operator-69b85846b6-7nw9s\" (UID: \"315dcf5a-c0ec-4778-9118-2f68422fcc17\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7nw9s"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.795375 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-audit-dir\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.795429 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-audit-policies\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.795540 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/36ec96a0-85cc-4757-ac20-cff015ffbe19-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-bkq66\" (UID: \"36ec96a0-85cc-4757-ac20-cff015ffbe19\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bkq66"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.795771 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7cqg\" (UniqueName: \"kubernetes.io/projected/cec067a0-6e27-4e3f-b03a-f37ffd10dd43-kube-api-access-l7cqg\") pod \"control-plane-machine-set-operator-75ffdb6fcd-dc2k6\" (UID: \"cec067a0-6e27-4e3f-b03a-f37ffd10dd43\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dc2k6"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.796074 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.796099 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fab52538-cd8b-408e-b571-f2dc516dc2a3-oauth-serving-cert\") pod \"console-64d44f6ddf-8nd58\" (UID: \"fab52538-cd8b-408e-b571-f2dc516dc2a3\") " pod="openshift-console/console-64d44f6ddf-8nd58"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.796126 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/47512efb-ea0a-42ac-a2c6-fd3017df0ce1-audit-policies\") pod \"apiserver-8596bd845d-2glqc\" (UID: \"47512efb-ea0a-42ac-a2c6-fd3017df0ce1\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.796146 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q9pbz\" (UniqueName: \"kubernetes.io/projected/197acdb1-438c-41ba-8b8d-a78197486cd7-kube-api-access-q9pbz\") pod \"package-server-manager-77f986bd66-5b4c8\" (UID: \"197acdb1-438c-41ba-8b8d-a78197486cd7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5b4c8"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.796101 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/333793e1-92de-4fbe-83b5-26b64848c6af-default-certificate\") pod \"router-default-68cf44c8b8-cn8mz\" (UID: \"333793e1-92de-4fbe-83b5-26b64848c6af\") " pod="openshift-ingress/router-default-68cf44c8b8-cn8mz"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.796171 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8053129-cc10-477e-b44f-52c846d9d1ce-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-v7d7l\" (UID: \"a8053129-cc10-477e-b44f-52c846d9d1ce\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v7d7l"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.796211 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09592b3a-cb47-43ee-97e7-f058888af3ff-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-hdhzp\" (UID: \"09592b3a-cb47-43ee-97e7-f058888af3ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hdhzp"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.796230 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f4fe1ed8-46ec-4253-8371-144cad3c3573-tmpfs\") pod \"catalog-operator-75ff9f647d-rjq8j\" (UID: \"f4fe1ed8-46ec-4253-8371-144cad3c3573\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rjq8j"
Jan 21 09:56:34 crc kubenswrapper[5119]: E0121 09:56:34.796322 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:35.296304686 +0000 UTC m=+110.964396364 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.796520 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/47512efb-ea0a-42ac-a2c6-fd3017df0ce1-audit-dir\") pod \"apiserver-8596bd845d-2glqc\" (UID: \"47512efb-ea0a-42ac-a2c6-fd3017df0ce1\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.796528 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f4fe1ed8-46ec-4253-8371-144cad3c3573-tmpfs\") pod \"catalog-operator-75ff9f647d-rjq8j\" (UID: \"f4fe1ed8-46ec-4253-8371-144cad3c3573\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rjq8j"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.797205 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d4bb4e5-bb28-41fd-8095-0392fd6b8afb-config\") pod \"openshift-controller-manager-operator-686468bdd5-h4bjc\" (UID: \"7d4bb4e5-bb28-41fd-8095-0392fd6b8afb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-h4bjc"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.797332 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/197acdb1-438c-41ba-8b8d-a78197486cd7-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-5b4c8\" (UID: \"197acdb1-438c-41ba-8b8d-a78197486cd7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5b4c8"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.797558 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4b2qp\" (UniqueName: \"kubernetes.io/projected/b88913b0-a37a-46b4-9c43-4e2e22f306d5-kube-api-access-4b2qp\") pod \"machine-config-controller-f9cdd68f7-nwbtj\" (UID: \"b88913b0-a37a-46b4-9c43-4e2e22f306d5\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nwbtj"
Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.797709 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8053129-cc10-477e-b44f-52c846d9d1ce-config\") pod \"kube-controller-manager-operator-69d5f845f8-v7d7l\" (UID: \"a8053129-cc10-477e-b44f-52c846d9d1ce\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v7d7l"
Jan 
21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.797821 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fab52538-cd8b-408e-b571-f2dc516dc2a3-console-oauth-config\") pod \"console-64d44f6ddf-8nd58\" (UID: \"fab52538-cd8b-408e-b571-f2dc516dc2a3\") " pod="openshift-console/console-64d44f6ddf-8nd58" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.797921 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d4bb4e5-bb28-41fd-8095-0392fd6b8afb-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-h4bjc\" (UID: \"7d4bb4e5-bb28-41fd-8095-0392fd6b8afb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-h4bjc" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.798018 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.797953 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fab52538-cd8b-408e-b571-f2dc516dc2a3-console-serving-cert\") pod \"console-64d44f6ddf-8nd58\" (UID: \"fab52538-cd8b-408e-b571-f2dc516dc2a3\") " pod="openshift-console/console-64d44f6ddf-8nd58" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.797833 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/333793e1-92de-4fbe-83b5-26b64848c6af-stats-auth\") pod \"router-default-68cf44c8b8-cn8mz\" (UID: 
\"333793e1-92de-4fbe-83b5-26b64848c6af\") " pod="openshift-ingress/router-default-68cf44c8b8-cn8mz" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.797489 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.798105 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-82cwv\" (UniqueName: \"kubernetes.io/projected/a1c5db5b-e8c1-4d79-aca9-10703c8e82db-kube-api-access-82cwv\") pod \"service-ca-74545575db-6jw94\" (UID: \"a1c5db5b-e8c1-4d79-aca9-10703c8e82db\") " pod="openshift-service-ca/service-ca-74545575db-6jw94" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.798333 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/47512efb-ea0a-42ac-a2c6-fd3017df0ce1-etcd-client\") pod \"apiserver-8596bd845d-2glqc\" (UID: \"47512efb-ea0a-42ac-a2c6-fd3017df0ce1\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.798416 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/315dcf5a-c0ec-4778-9118-2f68422fcc17-serving-cert\") pod \"etcd-operator-69b85846b6-7nw9s\" (UID: \"315dcf5a-c0ec-4778-9118-2f68422fcc17\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7nw9s" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.798494 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-chpll\" (UniqueName: 
\"kubernetes.io/projected/6169753a-a446-4d39-85c2-01422f667bde-kube-api-access-chpll\") pod \"packageserver-7d4fc7d867-fs8fn\" (UID: \"6169753a-a446-4d39-85c2-01422f667bde\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fs8fn" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.798559 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fdjfz\" (UniqueName: \"kubernetes.io/projected/333793e1-92de-4fbe-83b5-26b64848c6af-kube-api-access-fdjfz\") pod \"router-default-68cf44c8b8-cn8mz\" (UID: \"333793e1-92de-4fbe-83b5-26b64848c6af\") " pod="openshift-ingress/router-default-68cf44c8b8-cn8mz" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.798655 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/09592b3a-cb47-43ee-97e7-f058888af3ff-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-hdhzp\" (UID: \"09592b3a-cb47-43ee-97e7-f058888af3ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hdhzp" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.798759 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7mmb6\" (UniqueName: \"kubernetes.io/projected/984cb670-8e15-4092-bf42-f3c6337e1cad-kube-api-access-7mmb6\") pod \"kube-storage-version-migrator-operator-565b79b866-4rx9w\" (UID: \"984cb670-8e15-4092-bf42-f3c6337e1cad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4rx9w" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.798842 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8czzd\" (UniqueName: \"kubernetes.io/projected/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-kube-api-access-8czzd\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-7tls5" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.798920 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/984cb670-8e15-4092-bf42-f3c6337e1cad-config\") pod \"kube-storage-version-migrator-operator-565b79b866-4rx9w\" (UID: \"984cb670-8e15-4092-bf42-f3c6337e1cad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4rx9w" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.798995 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/333793e1-92de-4fbe-83b5-26b64848c6af-metrics-certs\") pod \"router-default-68cf44c8b8-cn8mz\" (UID: \"333793e1-92de-4fbe-83b5-26b64848c6af\") " pod="openshift-ingress/router-default-68cf44c8b8-cn8mz" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.799065 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8053129-cc10-477e-b44f-52c846d9d1ce-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-v7d7l\" (UID: \"a8053129-cc10-477e-b44f-52c846d9d1ce\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v7d7l" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.799132 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b88913b0-a37a-46b4-9c43-4e2e22f306d5-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-nwbtj\" (UID: \"b88913b0-a37a-46b4-9c43-4e2e22f306d5\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nwbtj" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.798497 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/315dcf5a-c0ec-4778-9118-2f68422fcc17-etcd-ca\") pod \"etcd-operator-69b85846b6-7nw9s\" (UID: \"315dcf5a-c0ec-4778-9118-2f68422fcc17\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7nw9s" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.799257 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cec067a0-6e27-4e3f-b03a-f37ffd10dd43-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-dc2k6\" (UID: \"cec067a0-6e27-4e3f-b03a-f37ffd10dd43\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dc2k6" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.799331 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lhn8q\" (UniqueName: \"kubernetes.io/projected/4c82a029-666a-49b5-8c4c-e8956a23303a-kube-api-access-lhn8q\") pod \"service-ca-operator-5b9c976747-vkvvq\" (UID: \"4c82a029-666a-49b5-8c4c-e8956a23303a\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-vkvvq" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.799431 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09592b3a-cb47-43ee-97e7-f058888af3ff-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-hdhzp\" (UID: \"09592b3a-cb47-43ee-97e7-f058888af3ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hdhzp" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.799547 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g92b5\" (UniqueName: \"kubernetes.io/projected/315dcf5a-c0ec-4778-9118-2f68422fcc17-kube-api-access-g92b5\") pod \"etcd-operator-69b85846b6-7nw9s\" (UID: \"315dcf5a-c0ec-4778-9118-2f68422fcc17\") " 
pod="openshift-etcd-operator/etcd-operator-69b85846b6-7nw9s" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.799675 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a8053129-cc10-477e-b44f-52c846d9d1ce-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-v7d7l\" (UID: \"a8053129-cc10-477e-b44f-52c846d9d1ce\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v7d7l" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.799804 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b88913b0-a37a-46b4-9c43-4e2e22f306d5-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-nwbtj\" (UID: \"b88913b0-a37a-46b4-9c43-4e2e22f306d5\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nwbtj" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.799941 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.800064 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c82a029-666a-49b5-8c4c-e8956a23303a-config\") pod \"service-ca-operator-5b9c976747-vkvvq\" (UID: \"4c82a029-666a-49b5-8c4c-e8956a23303a\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-vkvvq" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.800165 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/984cb670-8e15-4092-bf42-f3c6337e1cad-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-4rx9w\" (UID: \"984cb670-8e15-4092-bf42-f3c6337e1cad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4rx9w" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.800254 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.800341 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d29fd5f-ecd0-4624-97b8-5f2d50b70df0-config\") pod \"kube-apiserver-operator-575994946d-lpk4t\" (UID: \"9d29fd5f-ecd0-4624-97b8-5f2d50b70df0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lpk4t" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.799036 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/315dcf5a-c0ec-4778-9118-2f68422fcc17-etcd-client\") pod \"etcd-operator-69b85846b6-7nw9s\" (UID: \"315dcf5a-c0ec-4778-9118-2f68422fcc17\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7nw9s" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.801062 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a8053129-cc10-477e-b44f-52c846d9d1ce-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-v7d7l\" (UID: \"a8053129-cc10-477e-b44f-52c846d9d1ce\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v7d7l" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.802227 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/315dcf5a-c0ec-4778-9118-2f68422fcc17-serving-cert\") pod \"etcd-operator-69b85846b6-7nw9s\" (UID: \"315dcf5a-c0ec-4778-9118-2f68422fcc17\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7nw9s" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.802275 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fab52538-cd8b-408e-b571-f2dc516dc2a3-console-oauth-config\") pod \"console-64d44f6ddf-8nd58\" (UID: \"fab52538-cd8b-408e-b571-f2dc516dc2a3\") " pod="openshift-console/console-64d44f6ddf-8nd58" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.802334 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.803531 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fab52538-cd8b-408e-b571-f2dc516dc2a3-trusted-ca-bundle\") pod \"console-64d44f6ddf-8nd58\" (UID: \"fab52538-cd8b-408e-b571-f2dc516dc2a3\") " pod="openshift-console/console-64d44f6ddf-8nd58" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.811344 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.814914 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fab52538-cd8b-408e-b571-f2dc516dc2a3-service-ca\") pod \"console-64d44f6ddf-8nd58\" (UID: \"fab52538-cd8b-408e-b571-f2dc516dc2a3\") " 
pod="openshift-console/console-64d44f6ddf-8nd58" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.818149 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f4fe1ed8-46ec-4253-8371-144cad3c3573-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-rjq8j\" (UID: \"f4fe1ed8-46ec-4253-8371-144cad3c3573\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rjq8j" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.833047 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.836850 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fab52538-cd8b-408e-b571-f2dc516dc2a3-oauth-serving-cert\") pod \"console-64d44f6ddf-8nd58\" (UID: \"fab52538-cd8b-408e-b571-f2dc516dc2a3\") " pod="openshift-console/console-64d44f6ddf-8nd58" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.852684 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.871552 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.875372 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfe4596e-cffd-4e61-b095-455eea1ed712-config\") pod \"machine-approver-54c688565-4dmmv\" (UID: \"bfe4596e-cffd-4e61-b095-455eea1ed712\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-4dmmv" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.875673 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.875922 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65fa8c5a-91c4-411a-9586-2f893dfda634-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-6z5rg\" (UID: \"65fa8c5a-91c4-411a-9586-2f893dfda634\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6z5rg" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.876189 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65fa8c5a-91c4-411a-9586-2f893dfda634-config\") pod \"authentication-operator-7f5c659b84-6z5rg\" (UID: \"65fa8c5a-91c4-411a-9586-2f893dfda634\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6z5rg" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.876198 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/09592b3a-cb47-43ee-97e7-f058888af3ff-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-hdhzp\" (UID: \"09592b3a-cb47-43ee-97e7-f058888af3ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hdhzp" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.876240 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/333793e1-92de-4fbe-83b5-26b64848c6af-metrics-certs\") pod \"router-default-68cf44c8b8-cn8mz\" (UID: \"333793e1-92de-4fbe-83b5-26b64848c6af\") " pod="openshift-ingress/router-default-68cf44c8b8-cn8mz" Jan 21 
09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.876641 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.878910 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65fa8c5a-91c4-411a-9586-2f893dfda634-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-6z5rg\" (UID: \"65fa8c5a-91c4-411a-9586-2f893dfda634\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6z5rg" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.879979 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.881187 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.882595 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.883394 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65fa8c5a-91c4-411a-9586-2f893dfda634-serving-cert\") pod \"authentication-operator-7f5c659b84-6z5rg\" (UID: \"65fa8c5a-91c4-411a-9586-2f893dfda634\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6z5rg" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.892258 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.897229 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4f6fb51-60b9-4dcd-b79a-ebe933c83555-serving-cert\") pod \"console-operator-67c89758df-jvrpf\" (UID: \"f4f6fb51-60b9-4dcd-b79a-ebe933c83555\") " pod="openshift-console-operator/console-operator-67c89758df-jvrpf" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.901024 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-6kqm2"] Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.901073 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-94gcl"] Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.901087 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-gngm4"] Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.901102 5119 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/marketplace-operator-547dbd544d-z67hs"] Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.901432 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-hnjk6" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.901657 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:34 crc kubenswrapper[5119]: E0121 09:56:34.901887 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:35.401850726 +0000 UTC m=+111.069942404 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.902342 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:34 crc kubenswrapper[5119]: E0121 09:56:34.902659 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:35.402644948 +0000 UTC m=+111.070736626 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.912436 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.939366 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.949632 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f4f6fb51-60b9-4dcd-b79a-ebe933c83555-trusted-ca\") pod \"console-operator-67c89758df-jvrpf\" (UID: \"f4f6fb51-60b9-4dcd-b79a-ebe933c83555\") " pod="openshift-console-operator/console-operator-67c89758df-jvrpf" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.951663 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.958096 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4f6fb51-60b9-4dcd-b79a-ebe933c83555-config\") pod \"console-operator-67c89758df-jvrpf\" (UID: \"f4f6fb51-60b9-4dcd-b79a-ebe933c83555\") " pod="openshift-console-operator/console-operator-67c89758df-jvrpf" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.972736 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.974171 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483145-q8nfm"] Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.974239 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-g4flp"] Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.974859 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-z67hs" Jan 21 09:56:34 crc kubenswrapper[5119]: I0121 09:56:34.992906 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.003090 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.003260 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:35.503226674 +0000 UTC m=+111.171318392 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.003802 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.004111 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:35.504097028 +0000 UTC m=+111.172188706 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.012584 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.022851 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cec067a0-6e27-4e3f-b03a-f37ffd10dd43-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-dc2k6\" (UID: \"cec067a0-6e27-4e3f-b03a-f37ffd10dd43\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dc2k6" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.031649 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.052195 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.072494 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.082206 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4c82a029-666a-49b5-8c4c-e8956a23303a-config\") pod \"service-ca-operator-5b9c976747-vkvvq\" (UID: \"4c82a029-666a-49b5-8c4c-e8956a23303a\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-vkvvq" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.092011 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.105985 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.106172 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:35.606153475 +0000 UTC m=+111.274245153 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.106409 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.106679 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:35.606663258 +0000 UTC m=+111.274754926 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.111413 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.115132 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c82a029-666a-49b5-8c4c-e8956a23303a-serving-cert\") pod \"service-ca-operator-5b9c976747-vkvvq\" (UID: \"4c82a029-666a-49b5-8c4c-e8956a23303a\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-vkvvq" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.115450 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-9jrsz"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.115812 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-g4flp" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.132797 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.151380 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.171283 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.192477 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.208067 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.208208 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:35.708177972 +0000 UTC m=+111.376269680 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.208720 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.209020 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:35.709003394 +0000 UTC m=+111.377095112 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.211974 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.231901 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.251311 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.255433 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a1c5db5b-e8c1-4d79-aca9-10703c8e82db-signing-key\") pod \"service-ca-74545575db-6jw94\" (UID: \"a1c5db5b-e8c1-4d79-aca9-10703c8e82db\") " pod="openshift-service-ca/service-ca-74545575db-6jw94" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.272266 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-7855f"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.272524 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-9jrsz" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.272800 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.285275 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a1c5db5b-e8c1-4d79-aca9-10703c8e82db-signing-cabundle\") pod \"service-ca-74545575db-6jw94\" (UID: \"a1c5db5b-e8c1-4d79-aca9-10703c8e82db\") " pod="openshift-service-ca/service-ca-74545575db-6jw94" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.296866 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.310146 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.310374 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:35.810358132 +0000 UTC m=+111.478449810 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.310745 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.310923 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.311482 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.311646 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: 
\"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.312111 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.312458 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:35.812438858 +0000 UTC m=+111.480530566 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.314546 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.316051 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: 
\"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.319769 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.320672 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09592b3a-cb47-43ee-97e7-f058888af3ff-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-hdhzp\" (UID: \"09592b3a-cb47-43ee-97e7-f058888af3ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hdhzp" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.320841 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-wrb86"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.320872 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qpbqz"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.321078 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-7855f" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.326038 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-8224m"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.326194 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qpbqz" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.329126 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-dpf2h"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.329153 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.329167 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-gjxgc"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.330011 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-8224m" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.331881 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-7nw9s"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.331903 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-vkvvq"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.331917 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-jvrpf"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.331929 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-bkq66"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.331940 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-6jw94"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.331952 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dc2k6"] Jan 21 09:56:35 crc 
kubenswrapper[5119]: I0121 09:56:35.331961 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lpk4t"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.331972 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-8nd58"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.331981 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-x6xfs"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.331993 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-bc4nv"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.332007 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.332904 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-gjxgc" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.338425 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09592b3a-cb47-43ee-97e7-f058888af3ff-config\") pod \"openshift-kube-scheduler-operator-54f497555d-hdhzp\" (UID: \"09592b3a-cb47-43ee-97e7-f058888af3ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hdhzp" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.339459 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-j7g7z"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.339526 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4rx9w"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.339556 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rjq8j"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.339579 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-h4bjc"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.339639 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v7d7l"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.339682 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hdhzp"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.339709 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-6z5rg"] Jan 21 
09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.339761 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-2glqc"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.339789 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mgn7f"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.339812 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nwbtj"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.339840 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-76tl2"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.341101 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-bc4nv" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.346811 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-7tls5"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.346848 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-9jrsz"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.346860 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fs8fn"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.346873 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-hnjk6"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.346884 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-76tl2"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.346900 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5b4c8"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.346908 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-z67hs"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.346917 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-g4flp"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.346926 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-7855f"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.346935 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-prvwt"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.346945 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-8224m"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.346956 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qpbqz"] Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.347010 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-76tl2" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.353213 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.372502 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.392034 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.395049 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.412451 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.412832 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.412973 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e481d9e-6dd0-4c5e-bb9a-33546cb7715d-metrics-certs\") pod \"network-metrics-daemon-fk2f6\" (UID: \"0e481d9e-6dd0-4c5e-bb9a-33546cb7715d\") " pod="openshift-multus/network-metrics-daemon-fk2f6" Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.413156 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:35.913124268 +0000 UTC m=+111.581215986 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.413303 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.417543 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.419149 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d29fd5f-ecd0-4624-97b8-5f2d50b70df0-serving-cert\") pod \"kube-apiserver-operator-575994946d-lpk4t\" (UID: \"9d29fd5f-ecd0-4624-97b8-5f2d50b70df0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lpk4t" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.419684 5119 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e481d9e-6dd0-4c5e-bb9a-33546cb7715d-metrics-certs\") pod \"network-metrics-daemon-fk2f6\" (UID: \"0e481d9e-6dd0-4c5e-bb9a-33546cb7715d\") " pod="openshift-multus/network-metrics-daemon-fk2f6" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.432069 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.436048 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.443505 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d29fd5f-ecd0-4624-97b8-5f2d50b70df0-config\") pod \"kube-apiserver-operator-575994946d-lpk4t\" (UID: \"9d29fd5f-ecd0-4624-97b8-5f2d50b70df0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lpk4t" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.454379 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.472471 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.492487 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.507806 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a8053129-cc10-477e-b44f-52c846d9d1ce-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-v7d7l\" (UID: \"a8053129-cc10-477e-b44f-52c846d9d1ce\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v7d7l" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.512027 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.515212 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.515559 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:36.015546365 +0000 UTC m=+111.683638043 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.519199 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8053129-cc10-477e-b44f-52c846d9d1ce-config\") pod \"kube-controller-manager-operator-69d5f845f8-v7d7l\" (UID: \"a8053129-cc10-477e-b44f-52c846d9d1ce\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v7d7l" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.532534 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.552526 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.570518 5119 request.go:752] "Waited before sending request" delay="1.003151836s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.573283 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 21 09:56:35 crc kubenswrapper[5119]: W0121 
09:56:35.574913 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a9ae5f6_97bd_46ac_bafa_ca1b4452a141.slice/crio-04707b20cdb3625d732a070a216bafea9e64f577ced26208b27170ed4c81b82f WatchSource:0}: Error finding container 04707b20cdb3625d732a070a216bafea9e64f577ced26208b27170ed4c81b82f: Status 404 returned error can't find the container with id 04707b20cdb3625d732a070a216bafea9e64f577ced26208b27170ed4c81b82f Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.596886 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.598321 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/47512efb-ea0a-42ac-a2c6-fd3017df0ce1-etcd-serving-ca\") pod \"apiserver-8596bd845d-2glqc\" (UID: \"47512efb-ea0a-42ac-a2c6-fd3017df0ce1\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.612539 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.616359 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.616547 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-21 09:56:36.116521093 +0000 UTC m=+111.784612801 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.617707 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.618163 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:36.118145367 +0000 UTC m=+111.786237085 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.624853 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/47512efb-ea0a-42ac-a2c6-fd3017df0ce1-etcd-client\") pod \"apiserver-8596bd845d-2glqc\" (UID: \"47512efb-ea0a-42ac-a2c6-fd3017df0ce1\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.631996 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 21 09:56:35 crc kubenswrapper[5119]: W0121 09:56:35.635345 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17b87002_b798_480a_8e17_83053d698239.slice/crio-ce8f5a0eadd307377012e4b2cff65c0370ffe460a70650cbcc55f12ba331ea08 WatchSource:0}: Error finding container ce8f5a0eadd307377012e4b2cff65c0370ffe460a70650cbcc55f12ba331ea08: Status 404 returned error can't find the container with id ce8f5a0eadd307377012e4b2cff65c0370ffe460a70650cbcc55f12ba331ea08 Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.640865 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47512efb-ea0a-42ac-a2c6-fd3017df0ce1-serving-cert\") pod \"apiserver-8596bd845d-2glqc\" (UID: \"47512efb-ea0a-42ac-a2c6-fd3017df0ce1\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc" Jan 21 09:56:35 crc kubenswrapper[5119]: 
I0121 09:56:35.652567 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.670092 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/47512efb-ea0a-42ac-a2c6-fd3017df0ce1-encryption-config\") pod \"apiserver-8596bd845d-2glqc\" (UID: \"47512efb-ea0a-42ac-a2c6-fd3017df0ce1\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.671932 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.692029 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.695732 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47512efb-ea0a-42ac-a2c6-fd3017df0ce1-trusted-ca-bundle\") pod \"apiserver-8596bd845d-2glqc\" (UID: \"47512efb-ea0a-42ac-a2c6-fd3017df0ce1\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.708586 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fk2f6" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.716326 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.720987 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.721921 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:36.221852918 +0000 UTC m=+111.889944596 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.731541 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.752400 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.757015 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/47512efb-ea0a-42ac-a2c6-fd3017df0ce1-audit-policies\") pod \"apiserver-8596bd845d-2glqc\" (UID: \"47512efb-ea0a-42ac-a2c6-fd3017df0ce1\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.771777 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.791707 5119 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.791810 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d758cf9c-d67a-46df-a626-14e4a6a92be8-images podName:d758cf9c-d67a-46df-a626-14e4a6a92be8 nodeName:}" failed. 
No retries permitted until 2026-01-21 09:56:36.291790037 +0000 UTC m=+111.959881715 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/d758cf9c-d67a-46df-a626-14e4a6a92be8-images") pod "machine-config-operator-67c9d58cbb-mgn7f" (UID: "d758cf9c-d67a-46df-a626-14e4a6a92be8") : failed to sync configmap cache: timed out waiting for the condition Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.792554 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.793532 5119 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.793645 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6169753a-a446-4d39-85c2-01422f667bde-apiservice-cert podName:6169753a-a446-4d39-85c2-01422f667bde nodeName:}" failed. No retries permitted until 2026-01-21 09:56:36.293619946 +0000 UTC m=+111.961711624 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6169753a-a446-4d39-85c2-01422f667bde-apiservice-cert") pod "packageserver-7d4fc7d867-fs8fn" (UID: "6169753a-a446-4d39-85c2-01422f667bde") : failed to sync secret cache: timed out waiting for the condition Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.794150 5119 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.794210 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d758cf9c-d67a-46df-a626-14e4a6a92be8-proxy-tls podName:d758cf9c-d67a-46df-a626-14e4a6a92be8 nodeName:}" failed. No retries permitted until 2026-01-21 09:56:36.294197081 +0000 UTC m=+111.962288849 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/d758cf9c-d67a-46df-a626-14e4a6a92be8-proxy-tls") pod "machine-config-operator-67c9d58cbb-mgn7f" (UID: "d758cf9c-d67a-46df-a626-14e4a6a92be8") : failed to sync secret cache: timed out waiting for the condition Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.796129 5119 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.796172 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6169753a-a446-4d39-85c2-01422f667bde-webhook-cert podName:6169753a-a446-4d39-85c2-01422f667bde nodeName:}" failed. No retries permitted until 2026-01-21 09:56:36.296163713 +0000 UTC m=+111.964255391 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/6169753a-a446-4d39-85c2-01422f667bde-webhook-cert") pod "packageserver-7d4fc7d867-fs8fn" (UID: "6169753a-a446-4d39-85c2-01422f667bde") : failed to sync secret cache: timed out waiting for the condition Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.798247 5119 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.799167 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d4bb4e5-bb28-41fd-8095-0392fd6b8afb-config podName:7d4bb4e5-bb28-41fd-8095-0392fd6b8afb nodeName:}" failed. No retries permitted until 2026-01-21 09:56:36.299155284 +0000 UTC m=+111.967246962 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/7d4bb4e5-bb28-41fd-8095-0392fd6b8afb-config") pod "openshift-controller-manager-operator-686468bdd5-h4bjc" (UID: "7d4bb4e5-bb28-41fd-8095-0392fd6b8afb") : failed to sync configmap cache: timed out waiting for the condition Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.798284 5119 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.798320 5119 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.799283 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d4bb4e5-bb28-41fd-8095-0392fd6b8afb-serving-cert podName:7d4bb4e5-bb28-41fd-8095-0392fd6b8afb nodeName:}" 
failed. No retries permitted until 2026-01-21 09:56:36.299257227 +0000 UTC m=+111.967348905 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7d4bb4e5-bb28-41fd-8095-0392fd6b8afb-serving-cert") pod "openshift-controller-manager-operator-686468bdd5-h4bjc" (UID: "7d4bb4e5-bb28-41fd-8095-0392fd6b8afb") : failed to sync secret cache: timed out waiting for the condition Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.799305 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/197acdb1-438c-41ba-8b8d-a78197486cd7-package-server-manager-serving-cert podName:197acdb1-438c-41ba-8b8d-a78197486cd7 nodeName:}" failed. No retries permitted until 2026-01-21 09:56:36.299297508 +0000 UTC m=+111.967389186 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/197acdb1-438c-41ba-8b8d-a78197486cd7-package-server-manager-serving-cert") pod "package-server-manager-77f986bd66-5b4c8" (UID: "197acdb1-438c-41ba-8b8d-a78197486cd7") : failed to sync secret cache: timed out waiting for the condition Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.799333 5119 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: failed to sync configmap cache: timed out waiting for the condition Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.799375 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/984cb670-8e15-4092-bf42-f3c6337e1cad-config podName:984cb670-8e15-4092-bf42-f3c6337e1cad nodeName:}" failed. No retries permitted until 2026-01-21 09:56:36.29936415 +0000 UTC m=+111.967455828 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/984cb670-8e15-4092-bf42-f3c6337e1cad-config") pod "kube-storage-version-migrator-operator-565b79b866-4rx9w" (UID: "984cb670-8e15-4092-bf42-f3c6337e1cad") : failed to sync configmap cache: timed out waiting for the condition Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.800074 5119 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.800175 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b88913b0-a37a-46b4-9c43-4e2e22f306d5-proxy-tls podName:b88913b0-a37a-46b4-9c43-4e2e22f306d5 nodeName:}" failed. No retries permitted until 2026-01-21 09:56:36.30015422 +0000 UTC m=+111.968245898 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/b88913b0-a37a-46b4-9c43-4e2e22f306d5-proxy-tls") pod "machine-config-controller-f9cdd68f7-nwbtj" (UID: "b88913b0-a37a-46b4-9c43-4e2e22f306d5") : failed to sync secret cache: timed out waiting for the condition Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.800898 5119 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.800949 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/984cb670-8e15-4092-bf42-f3c6337e1cad-serving-cert podName:984cb670-8e15-4092-bf42-f3c6337e1cad nodeName:}" failed. No retries permitted until 2026-01-21 09:56:36.300938551 +0000 UTC m=+111.969030229 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/984cb670-8e15-4092-bf42-f3c6337e1cad-serving-cert") pod "kube-storage-version-migrator-operator-565b79b866-4rx9w" (UID: "984cb670-8e15-4092-bf42-f3c6337e1cad") : failed to sync secret cache: timed out waiting for the condition Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.809586 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f4fe1ed8-46ec-4253-8371-144cad3c3573-srv-cert\") pod \"catalog-operator-75ff9f647d-rjq8j\" (UID: \"f4fe1ed8-46ec-4253-8371-144cad3c3573\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rjq8j" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.811849 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.822977 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.823802 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:36.32374139 +0000 UTC m=+111.991833078 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.831797 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.851288 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.872436 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.882578 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-fk2f6"] Jan 21 09:56:35 crc kubenswrapper[5119]: W0121 09:56:35.889923 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e481d9e_6dd0_4c5e_bb9a_33546cb7715d.slice/crio-c8dfa7d92fb5c4192e639a3f2c25645b048c653ad7bf83b3fe72582c7a8497eb WatchSource:0}: Error finding container c8dfa7d92fb5c4192e639a3f2c25645b048c653ad7bf83b3fe72582c7a8497eb: Status 404 returned error can't find the container with id c8dfa7d92fb5c4192e639a3f2c25645b048c653ad7bf83b3fe72582c7a8497eb Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.892301 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.912843 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.923926 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.924148 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:36.424118863 +0000 UTC m=+112.092210551 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.924510 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:35 crc kubenswrapper[5119]: E0121 09:56:35.925211 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:36.425198372 +0000 UTC m=+112.093290040 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.932000 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.952779 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.972101 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 21 09:56:35 crc kubenswrapper[5119]: I0121 09:56:35.992161 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.012271 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.026431 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:36 crc kubenswrapper[5119]: E0121 09:56:36.026582 5119 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:36.526558621 +0000 UTC m=+112.194650299 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.027061 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:36 crc kubenswrapper[5119]: E0121 09:56:36.027508 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:36.527490915 +0000 UTC m=+112.195582603 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.033219 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\""
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.045603 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"5bf23783fcca21f39d973769d8513c2affbf2fca3065da34d534297bb43d33d0"}
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.045723 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"ce8f5a0eadd307377012e4b2cff65c0370ffe460a70650cbcc55f12ba331ea08"}
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.045985 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.046623 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"c4c0fd22bb18e538a7dc8b8414e0e651bc63719c98de8767bce3629f47d11b6c"}
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.046653 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"4df743c395dbf522f9d861114efe0968efe6d43a17f96cfc659926ff77b432f8"}
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.047548 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"feb393be46dd81e69943edde4a9e13545c11710acc9030f902ed6a60809127e4"}
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.047597 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"04707b20cdb3625d732a070a216bafea9e64f577ced26208b27170ed4c81b82f"}
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.048211 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-fk2f6" event={"ID":"0e481d9e-6dd0-4c5e-bb9a-33546cb7715d","Type":"ContainerStarted","Data":"c8dfa7d92fb5c4192e639a3f2c25645b048c653ad7bf83b3fe72582c7a8497eb"}
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.052554 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\""
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.071713 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\""
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.091767 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\""
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.111369 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.128998 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:36 crc kubenswrapper[5119]: E0121 09:56:36.129133 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:36.6291147 +0000 UTC m=+112.297206378 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.132329 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.169701 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqwbp\" (UniqueName: \"kubernetes.io/projected/9202c0b0-32fd-49a9-85ce-98c79744bfcf-kube-api-access-wqwbp\") pod \"openshift-config-operator-5777786469-dpf2h\" (UID: \"9202c0b0-32fd-49a9-85ce-98c79744bfcf\") " pod="openshift-config-operator/openshift-config-operator-5777786469-dpf2h"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.185570 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvrbm\" (UniqueName: \"kubernetes.io/projected/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-kube-api-access-lvrbm\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.205453 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmpxk\" (UniqueName: \"kubernetes.io/projected/1ccf6a04-2820-4b99-9dbd-2e6d111b4fed-kube-api-access-pmpxk\") pod \"machine-api-operator-755bb95488-6kqm2\" (UID: \"1ccf6a04-2820-4b99-9dbd-2e6d111b4fed\") " pod="openshift-machine-api/machine-api-operator-755bb95488-6kqm2"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.226919 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fhws\" (UniqueName: \"kubernetes.io/projected/19471fb3-19b2-42d4-967e-6b0620f686ce-kube-api-access-4fhws\") pod \"ingress-operator-6b9cb4dbcf-x6xfs\" (UID: \"19471fb3-19b2-42d4-967e-6b0620f686ce\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-x6xfs"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.239273 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:36 crc kubenswrapper[5119]: E0121 09:56:36.239596 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:36.739584233 +0000 UTC m=+112.407675911 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.248775 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d762n\" (UniqueName: \"kubernetes.io/projected/986b7816-8325-48f6-b5a5-2d51c9f31687-kube-api-access-d762n\") pod \"apiserver-9ddfb9f55-wrb86\" (UID: \"986b7816-8325-48f6-b5a5-2d51c9f31687\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.265437 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-bound-sa-token\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.275906 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-6kqm2"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.284334 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9pql\" (UniqueName: \"kubernetes.io/projected/49421ea1-6e3f-41b1-b0d6-e821cac2f8ab-kube-api-access-n9pql\") pod \"openshift-apiserver-operator-846cbfc458-pxxr9\" (UID: \"49421ea1-6e3f-41b1-b0d6-e821cac2f8ab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-pxxr9"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.305687 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8qvc\" (UniqueName: \"kubernetes.io/projected/f53b6ab7-e57d-4f85-adef-9a60515f8f1f-kube-api-access-s8qvc\") pod \"collect-profiles-29483145-q8nfm\" (UID: \"f53b6ab7-e57d-4f85-adef-9a60515f8f1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-q8nfm"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.331868 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fzp5\" (UniqueName: \"kubernetes.io/projected/bfe4596e-cffd-4e61-b095-455eea1ed712-kube-api-access-5fzp5\") pod \"machine-approver-54c688565-4dmmv\" (UID: \"bfe4596e-cffd-4e61-b095-455eea1ed712\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-4dmmv"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.340647 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.340761 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/984cb670-8e15-4092-bf42-f3c6337e1cad-config\") pod \"kube-storage-version-migrator-operator-565b79b866-4rx9w\" (UID: \"984cb670-8e15-4092-bf42-f3c6337e1cad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4rx9w"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.340792 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b88913b0-a37a-46b4-9c43-4e2e22f306d5-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-nwbtj\" (UID: \"b88913b0-a37a-46b4-9c43-4e2e22f306d5\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nwbtj"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.340813 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/984cb670-8e15-4092-bf42-f3c6337e1cad-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-4rx9w\" (UID: \"984cb670-8e15-4092-bf42-f3c6337e1cad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4rx9w"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.340842 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d758cf9c-d67a-46df-a626-14e4a6a92be8-images\") pod \"machine-config-operator-67c9d58cbb-mgn7f\" (UID: \"d758cf9c-d67a-46df-a626-14e4a6a92be8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mgn7f"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.340898 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6169753a-a446-4d39-85c2-01422f667bde-webhook-cert\") pod \"packageserver-7d4fc7d867-fs8fn\" (UID: \"6169753a-a446-4d39-85c2-01422f667bde\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fs8fn"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.340919 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d758cf9c-d67a-46df-a626-14e4a6a92be8-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-mgn7f\" (UID: \"d758cf9c-d67a-46df-a626-14e4a6a92be8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mgn7f"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.340947 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6169753a-a446-4d39-85c2-01422f667bde-apiservice-cert\") pod \"packageserver-7d4fc7d867-fs8fn\" (UID: \"6169753a-a446-4d39-85c2-01422f667bde\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fs8fn"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.340985 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d4bb4e5-bb28-41fd-8095-0392fd6b8afb-config\") pod \"openshift-controller-manager-operator-686468bdd5-h4bjc\" (UID: \"7d4bb4e5-bb28-41fd-8095-0392fd6b8afb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-h4bjc"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.341001 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/197acdb1-438c-41ba-8b8d-a78197486cd7-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-5b4c8\" (UID: \"197acdb1-438c-41ba-8b8d-a78197486cd7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5b4c8"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.341025 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d4bb4e5-bb28-41fd-8095-0392fd6b8afb-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-h4bjc\" (UID: \"7d4bb4e5-bb28-41fd-8095-0392fd6b8afb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-h4bjc"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.346927 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d758cf9c-d67a-46df-a626-14e4a6a92be8-images\") pod \"machine-config-operator-67c9d58cbb-mgn7f\" (UID: \"d758cf9c-d67a-46df-a626-14e4a6a92be8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mgn7f"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.347031 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/984cb670-8e15-4092-bf42-f3c6337e1cad-config\") pod \"kube-storage-version-migrator-operator-565b79b866-4rx9w\" (UID: \"984cb670-8e15-4092-bf42-f3c6337e1cad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4rx9w"
Jan 21 09:56:36 crc kubenswrapper[5119]: E0121 09:56:36.347093 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:36.847075765 +0000 UTC m=+112.515167443 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.348193 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d4bb4e5-bb28-41fd-8095-0392fd6b8afb-config\") pod \"openshift-controller-manager-operator-686468bdd5-h4bjc\" (UID: \"7d4bb4e5-bb28-41fd-8095-0392fd6b8afb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-h4bjc"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.348464 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d4bb4e5-bb28-41fd-8095-0392fd6b8afb-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-h4bjc\" (UID: \"7d4bb4e5-bb28-41fd-8095-0392fd6b8afb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-h4bjc"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.349453 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d758cf9c-d67a-46df-a626-14e4a6a92be8-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-mgn7f\" (UID: \"d758cf9c-d67a-46df-a626-14e4a6a92be8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mgn7f"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.349478 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6169753a-a446-4d39-85c2-01422f667bde-apiservice-cert\") pod \"packageserver-7d4fc7d867-fs8fn\" (UID: \"6169753a-a446-4d39-85c2-01422f667bde\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fs8fn"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.349489 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvrkq\" (UniqueName: \"kubernetes.io/projected/d3c70e39-bf38-42a7-b579-ed17a163a5b1-kube-api-access-xvrkq\") pod \"route-controller-manager-776cdc94d6-vgx98\" (UID: \"d3c70e39-bf38-42a7-b579-ed17a163a5b1\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.350384 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/984cb670-8e15-4092-bf42-f3c6337e1cad-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-4rx9w\" (UID: \"984cb670-8e15-4092-bf42-f3c6337e1cad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4rx9w"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.350524 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6169753a-a446-4d39-85c2-01422f667bde-webhook-cert\") pod \"packageserver-7d4fc7d867-fs8fn\" (UID: \"6169753a-a446-4d39-85c2-01422f667bde\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fs8fn"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.350543 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/197acdb1-438c-41ba-8b8d-a78197486cd7-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-5b4c8\" (UID: \"197acdb1-438c-41ba-8b8d-a78197486cd7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5b4c8"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.351322 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b88913b0-a37a-46b4-9c43-4e2e22f306d5-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-nwbtj\" (UID: \"b88913b0-a37a-46b4-9c43-4e2e22f306d5\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nwbtj"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.361781 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.365323 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-clhbh\" (UniqueName: \"kubernetes.io/projected/ee0294ff-f61f-492b-b738-fbbee8f757eb-kube-api-access-clhbh\") pod \"controller-manager-65b6cccf98-gngm4\" (UID: \"ee0294ff-f61f-492b-b738-fbbee8f757eb\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-gngm4"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.402891 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-dpf2h"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.403211 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-pxxr9"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.403474 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.403570 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-gngm4"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.424983 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-q8nfm"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.425477 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/19471fb3-19b2-42d4-967e-6b0620f686ce-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-x6xfs\" (UID: \"19471fb3-19b2-42d4-967e-6b0620f686ce\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-x6xfs"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.427251 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkmpg\" (UniqueName: \"kubernetes.io/projected/65fa8c5a-91c4-411a-9586-2f893dfda634-kube-api-access-hkmpg\") pod \"authentication-operator-7f5c659b84-6z5rg\" (UID: \"65fa8c5a-91c4-411a-9586-2f893dfda634\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6z5rg"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.434250 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-4dmmv"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.442622 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:36 crc kubenswrapper[5119]: E0121 09:56:36.443127 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:36.943114381 +0000 UTC m=+112.611206059 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.449476 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4pnb\" (UniqueName: \"kubernetes.io/projected/36ec96a0-85cc-4757-ac20-cff015ffbe19-kube-api-access-d4pnb\") pod \"cluster-image-registry-operator-86c45576b9-bkq66\" (UID: \"36ec96a0-85cc-4757-ac20-cff015ffbe19\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bkq66"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.469986 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q6ds\" (UniqueName: \"kubernetes.io/projected/7d4bb4e5-bb28-41fd-8095-0392fd6b8afb-kube-api-access-7q6ds\") pod \"openshift-controller-manager-operator-686468bdd5-h4bjc\" (UID: \"7d4bb4e5-bb28-41fd-8095-0392fd6b8afb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-h4bjc"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.492119 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhvsd\" (UniqueName: \"kubernetes.io/projected/d758cf9c-d67a-46df-a626-14e4a6a92be8-kube-api-access-qhvsd\") pod \"machine-config-operator-67c9d58cbb-mgn7f\" (UID: \"d758cf9c-d67a-46df-a626-14e4a6a92be8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mgn7f"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.524936 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jx77\" (UniqueName: \"kubernetes.io/projected/fab52538-cd8b-408e-b571-f2dc516dc2a3-kube-api-access-5jx77\") pod \"console-64d44f6ddf-8nd58\" (UID: \"fab52538-cd8b-408e-b571-f2dc516dc2a3\") " pod="openshift-console/console-64d44f6ddf-8nd58"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.525175 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-6kqm2"]
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.539153 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqdlz\" (UniqueName: \"kubernetes.io/projected/f4fe1ed8-46ec-4253-8371-144cad3c3573-kube-api-access-cqdlz\") pod \"catalog-operator-75ff9f647d-rjq8j\" (UID: \"f4fe1ed8-46ec-4253-8371-144cad3c3573\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rjq8j"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.545849 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:36 crc kubenswrapper[5119]: E0121 09:56:36.546020 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:37.0459949 +0000 UTC m=+112.714086568 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.546401 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:36 crc kubenswrapper[5119]: E0121 09:56:36.549073 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:37.049059612 +0000 UTC m=+112.717151290 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.554773 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wvbg\" (UniqueName: \"kubernetes.io/projected/d91bd19e-1fee-475c-a8ff-ee1014086695-kube-api-access-9wvbg\") pod \"downloads-747b44746d-prvwt\" (UID: \"d91bd19e-1fee-475c-a8ff-ee1014086695\") " pod="openshift-console/downloads-747b44746d-prvwt"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.560527 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98"]
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.572254 5119 request.go:752] "Waited before sending request" delay="1.776038939s" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.575504 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rjq8j"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.576818 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7cqg\" (UniqueName: \"kubernetes.io/projected/cec067a0-6e27-4e3f-b03a-f37ffd10dd43-kube-api-access-l7cqg\") pod \"control-plane-machine-set-operator-75ffdb6fcd-dc2k6\" (UID: \"cec067a0-6e27-4e3f-b03a-f37ffd10dd43\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dc2k6"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.589992 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mgn7f"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.590282 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9pbz\" (UniqueName: \"kubernetes.io/projected/197acdb1-438c-41ba-8b8d-a78197486cd7-kube-api-access-q9pbz\") pod \"package-server-manager-77f986bd66-5b4c8\" (UID: \"197acdb1-438c-41ba-8b8d-a78197486cd7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5b4c8"
Jan 21 09:56:36 crc kubenswrapper[5119]: W0121 09:56:36.598450 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3c70e39_bf38_42a7_b579_ed17a163a5b1.slice/crio-0b01e0d2386874f8e8037c51db2faf56cb3cf6f009eb38c5398ae4339dd6f1f6 WatchSource:0}: Error finding container 0b01e0d2386874f8e8037c51db2faf56cb3cf6f009eb38c5398ae4339dd6f1f6: Status 404 returned error can't find the container with id 0b01e0d2386874f8e8037c51db2faf56cb3cf6f009eb38c5398ae4339dd6f1f6
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.606726 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8053129-cc10-477e-b44f-52c846d9d1ce-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-v7d7l\" (UID: \"a8053129-cc10-477e-b44f-52c846d9d1ce\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v7d7l"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.622728 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6z5rg"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.627291 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d29fd5f-ecd0-4624-97b8-5f2d50b70df0-kube-api-access\") pod \"kube-apiserver-operator-575994946d-lpk4t\" (UID: \"9d29fd5f-ecd0-4624-97b8-5f2d50b70df0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lpk4t"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.628112 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5b4c8"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.647239 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rkvv\" (UniqueName: \"kubernetes.io/projected/47512efb-ea0a-42ac-a2c6-fd3017df0ce1-kube-api-access-4rkvv\") pod \"apiserver-8596bd845d-2glqc\" (UID: \"47512efb-ea0a-42ac-a2c6-fd3017df0ce1\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.647685 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:36 crc kubenswrapper[5119]: E0121 09:56:36.647975 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:37.147961966 +0000 UTC m=+112.816053644 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.679924 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl8n4\" (UniqueName: \"kubernetes.io/projected/637fb734-7cb1-46f5-a282-438e701620d5-kube-api-access-fl8n4\") pod \"migrator-866fcbc849-j7g7z\" (UID: \"637fb734-7cb1-46f5-a282-438e701620d5\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-j7g7z"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.690684 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36ec96a0-85cc-4757-ac20-cff015ffbe19-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-bkq66\" (UID: \"36ec96a0-85cc-4757-ac20-cff015ffbe19\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bkq66"
Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.705461 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-x6xfs" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.706830 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l68lw\" (UniqueName: \"kubernetes.io/projected/f4f6fb51-60b9-4dcd-b79a-ebe933c83555-kube-api-access-l68lw\") pod \"console-operator-67c89758df-jvrpf\" (UID: \"f4f6fb51-60b9-4dcd-b79a-ebe933c83555\") " pod="openshift-console-operator/console-operator-67c89758df-jvrpf" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.712383 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-h4bjc" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.727480 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b2qp\" (UniqueName: \"kubernetes.io/projected/b88913b0-a37a-46b4-9c43-4e2e22f306d5-kube-api-access-4b2qp\") pod \"machine-config-controller-f9cdd68f7-nwbtj\" (UID: \"b88913b0-a37a-46b4-9c43-4e2e22f306d5\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nwbtj" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.744952 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bkq66" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.749008 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:36 crc kubenswrapper[5119]: E0121 09:56:36.749307 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:37.249292203 +0000 UTC m=+112.917383881 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.750618 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-82cwv\" (UniqueName: \"kubernetes.io/projected/a1c5db5b-e8c1-4d79-aca9-10703c8e82db-kube-api-access-82cwv\") pod \"service-ca-74545575db-6jw94\" (UID: \"a1c5db5b-e8c1-4d79-aca9-10703c8e82db\") " pod="openshift-service-ca/service-ca-74545575db-6jw94" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.764820 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d44f6ddf-8nd58" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.769233 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-chpll\" (UniqueName: \"kubernetes.io/projected/6169753a-a446-4d39-85c2-01422f667bde-kube-api-access-chpll\") pod \"packageserver-7d4fc7d867-fs8fn\" (UID: \"6169753a-a446-4d39-85c2-01422f667bde\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fs8fn" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.773019 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-jvrpf" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.781813 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dc2k6" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.794513 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdjfz\" (UniqueName: \"kubernetes.io/projected/333793e1-92de-4fbe-83b5-26b64848c6af-kube-api-access-fdjfz\") pod \"router-default-68cf44c8b8-cn8mz\" (UID: \"333793e1-92de-4fbe-83b5-26b64848c6af\") " pod="openshift-ingress/router-default-68cf44c8b8-cn8mz" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.808163 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mmb6\" (UniqueName: \"kubernetes.io/projected/984cb670-8e15-4092-bf42-f3c6337e1cad-kube-api-access-7mmb6\") pod \"kube-storage-version-migrator-operator-565b79b866-4rx9w\" (UID: \"984cb670-8e15-4092-bf42-f3c6337e1cad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4rx9w" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.829302 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8czzd\" (UniqueName: 
\"kubernetes.io/projected/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-kube-api-access-8czzd\") pod \"oauth-openshift-66458b6674-7tls5\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") " pod="openshift-authentication/oauth-openshift-66458b6674-7tls5" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.829489 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-prvwt" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.844450 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lpk4t" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.849151 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhn8q\" (UniqueName: \"kubernetes.io/projected/4c82a029-666a-49b5-8c4c-e8956a23303a-kube-api-access-lhn8q\") pod \"service-ca-operator-5b9c976747-vkvvq\" (UID: \"4c82a029-666a-49b5-8c4c-e8956a23303a\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-vkvvq" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.849202 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rjq8j"] Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.849545 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:36 crc kubenswrapper[5119]: E0121 09:56:36.850229 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-21 09:56:37.35021134 +0000 UTC m=+113.018303018 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.850329 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:36 crc kubenswrapper[5119]: E0121 09:56:36.850553 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:37.350546599 +0000 UTC m=+113.018638277 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.850814 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v7d7l" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.867916 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.869919 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-dpf2h"] Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.886422 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09592b3a-cb47-43ee-97e7-f058888af3ff-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-hdhzp\" (UID: \"09592b3a-cb47-43ee-97e7-f058888af3ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hdhzp" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.900220 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.911847 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g92b5\" (UniqueName: \"kubernetes.io/projected/315dcf5a-c0ec-4778-9118-2f68422fcc17-kube-api-access-g92b5\") pod \"etcd-operator-69b85846b6-7nw9s\" (UID: \"315dcf5a-c0ec-4778-9118-2f68422fcc17\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7nw9s" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.924774 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nwbtj" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.932362 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.933759 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fs8fn" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.934146 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.950999 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:36 crc kubenswrapper[5119]: E0121 09:56:36.951197 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:37.451181268 +0000 UTC m=+113.119272946 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.953891 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.960189 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-pxxr9"] Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.972571 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.975522 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4rx9w" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.975756 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-j7g7z" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.976047 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-6jw94" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.988254 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483145-q8nfm"] Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.990028 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-wrb86"] Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.992033 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.992585 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-gngm4"] Jan 21 09:56:36 crc kubenswrapper[5119]: I0121 09:56:36.995831 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mgn7f"] Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.013087 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 21 09:56:37 crc kubenswrapper[5119]: W0121 09:56:37.018100 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49421ea1_6e3f_41b1_b0d6_e821cac2f8ab.slice/crio-3f3f97cfc90ae6c862f99a58318cdde1043e54e9c0e19d44fc5910626a9f27e1 WatchSource:0}: Error finding container 3f3f97cfc90ae6c862f99a58318cdde1043e54e9c0e19d44fc5910626a9f27e1: Status 404 returned error can't find the container with id 3f3f97cfc90ae6c862f99a58318cdde1043e54e9c0e19d44fc5910626a9f27e1 Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.035564 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" 
Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.043685 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-7tls5" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.050849 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-7nw9s" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.051573 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.052269 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:37 crc kubenswrapper[5119]: E0121 09:56:37.052516 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:37.552502845 +0000 UTC m=+113.220594513 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.052541 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-6z5rg"] Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.058467 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-cn8mz" Jan 21 09:56:37 crc kubenswrapper[5119]: W0121 09:56:37.063282 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf53b6ab7_e57d_4f85_adef_9a60515f8f1f.slice/crio-304cfd66cc9dd19e059433737eac2672837212cf9dc77dde1d44c8e2afc7e3a8 WatchSource:0}: Error finding container 304cfd66cc9dd19e059433737eac2672837212cf9dc77dde1d44c8e2afc7e3a8: Status 404 returned error can't find the container with id 304cfd66cc9dd19e059433737eac2672837212cf9dc77dde1d44c8e2afc7e3a8 Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.063658 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-6kqm2" event={"ID":"1ccf6a04-2820-4b99-9dbd-2e6d111b4fed","Type":"ContainerStarted","Data":"5d7fb0caee47e14d13abe918f17eb6c0534a2760e988104969d1bfc70608742b"} Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.063689 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-6kqm2" 
event={"ID":"1ccf6a04-2820-4b99-9dbd-2e6d111b4fed","Type":"ContainerStarted","Data":"c6a970e9e025c61e32ba53a4c9a26d8d07837cdb9a91ce900b7344b774fd18ac"} Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.071895 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.079067 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-fk2f6" event={"ID":"0e481d9e-6dd0-4c5e-bb9a-33546cb7715d","Type":"ContainerStarted","Data":"b4e92215923cc0f7858ff18a236a2491b13debde9dd6f2e274bdd1b3be7bb0df"} Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.079102 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-fk2f6" event={"ID":"0e481d9e-6dd0-4c5e-bb9a-33546cb7715d","Type":"ContainerStarted","Data":"510b56329e1c8387e8bf4bee537c6397a765d89941732ad52e8e77f9117d3ca0"} Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.080244 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rjq8j" event={"ID":"f4fe1ed8-46ec-4253-8371-144cad3c3573","Type":"ContainerStarted","Data":"ac29835e599e3fa7f4f3f68f81b82a3fde0d3660690d3930fe5cf9907c495d3e"} Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.081172 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-pxxr9" event={"ID":"49421ea1-6e3f-41b1-b0d6-e821cac2f8ab","Type":"ContainerStarted","Data":"3f3f97cfc90ae6c862f99a58318cdde1043e54e9c0e19d44fc5910626a9f27e1"} Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.081771 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-dpf2h" 
event={"ID":"9202c0b0-32fd-49a9-85ce-98c79744bfcf","Type":"ContainerStarted","Data":"4275bc8b7fbcadec84b5ac7e759f2d130cbad385530c7532836429caecc62d81"} Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.082522 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98" event={"ID":"d3c70e39-bf38-42a7-b579-ed17a163a5b1","Type":"ContainerStarted","Data":"a7fba0330a98646af203f10d301d7c5346e6f74a1057cf333ae4391c410c415b"} Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.082540 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98" event={"ID":"d3c70e39-bf38-42a7-b579-ed17a163a5b1","Type":"ContainerStarted","Data":"0b01e0d2386874f8e8037c51db2faf56cb3cf6f009eb38c5398ae4339dd6f1f6"} Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.082928 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.091848 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.091989 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-4dmmv" event={"ID":"bfe4596e-cffd-4e61-b095-455eea1ed712","Type":"ContainerStarted","Data":"495c7124b8d18e06288ce1c75dda856ccff680d532a27e6416c62466ef49ceac"} Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.094010 5119 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-vgx98 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 
21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.094075 5119 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98" podUID="d3c70e39-bf38-42a7-b579-ed17a163a5b1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.111877 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.115568 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-vkvvq" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.125273 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hdhzp" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.132350 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.153065 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:37 crc kubenswrapper[5119]: E0121 09:56:37.153406 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-21 09:56:37.653387451 +0000 UTC m=+113.321479139 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.153683 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.173338 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.202871 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.213116 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.233383 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.254122 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.254801 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:37 crc kubenswrapper[5119]: E0121 09:56:37.255132 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:37.75511948 +0000 UTC m=+113.423211158 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.273665 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.292333 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.312500 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.332826 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.334019 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-h4bjc"] Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.352849 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.355710 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:37 crc kubenswrapper[5119]: E0121 09:56:37.356104 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:37.856086677 +0000 UTC m=+113.524178355 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.373964 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.394003 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.413699 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.431775 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.452756 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.456997 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:37 crc kubenswrapper[5119]: E0121 09:56:37.457341 5119 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:37.957324993 +0000 UTC m=+113.625416671 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.471825 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.557934 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:37 crc kubenswrapper[5119]: E0121 09:56:37.559942 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:38.059919615 +0000 UTC m=+113.728011283 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.661248 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c82nr\" (UniqueName: \"kubernetes.io/projected/0e7b694c-1e5a-4209-9baf-67cd7bc2f3af-kube-api-access-c82nr\") pod \"cluster-samples-operator-6b564684c8-g4flp\" (UID: \"0e7b694c-1e5a-4209-9baf-67cd7bc2f3af\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-g4flp" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.661306 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j8zg\" (UniqueName: \"kubernetes.io/projected/b4c88372-61dd-4fb9-8bcf-7c51ec904dd8-kube-api-access-6j8zg\") pod \"cni-sysctl-allowlist-ds-bc4nv\" (UID: \"b4c88372-61dd-4fb9-8bcf-7c51ec904dd8\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bc4nv" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.661322 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-572s8\" (UniqueName: \"kubernetes.io/projected/a1df92f8-e439-4da3-af25-cdbf8374d2da-kube-api-access-572s8\") pod \"machine-config-server-gjxgc\" (UID: \"a1df92f8-e439-4da3-af25-cdbf8374d2da\") " pod="openshift-machine-config-operator/machine-config-server-gjxgc" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.661341 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/85f163dc-d2c8-4d62-9fa2-48d75035cbfa-config-volume\") pod \"dns-default-8224m\" (UID: \"85f163dc-d2c8-4d62-9fa2-48d75035cbfa\") " pod="openshift-dns/dns-default-8224m" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.661395 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0db2f4be-492e-40be-84b1-3578f55c1efb-srv-cert\") pod \"olm-operator-5cdf44d969-hnjk6\" (UID: \"0db2f4be-492e-40be-84b1-3578f55c1efb\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-hnjk6" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.661409 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6xww\" (UniqueName: \"kubernetes.io/projected/f5a878c2-9a7b-4d34-a9ee-28bdd05d3151-kube-api-access-p6xww\") pod \"marketplace-operator-547dbd544d-z67hs\" (UID: \"f5a878c2-9a7b-4d34-a9ee-28bdd05d3151\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z67hs" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.661472 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7cdaf693-5dea-4260-bb45-209fcd54b53e-registration-dir\") pod \"csi-hostpathplugin-qpbqz\" (UID: \"7cdaf693-5dea-4260-bb45-209fcd54b53e\") " pod="hostpath-provisioner/csi-hostpathplugin-qpbqz" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.661491 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-292jh\" (UniqueName: \"kubernetes.io/projected/85f163dc-d2c8-4d62-9fa2-48d75035cbfa-kube-api-access-292jh\") pod \"dns-default-8224m\" (UID: \"85f163dc-d2c8-4d62-9fa2-48d75035cbfa\") " pod="openshift-dns/dns-default-8224m" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.661510 5119 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b4c88372-61dd-4fb9-8bcf-7c51ec904dd8-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-bc4nv\" (UID: \"b4c88372-61dd-4fb9-8bcf-7c51ec904dd8\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bc4nv" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.661528 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngmkb\" (UniqueName: \"kubernetes.io/projected/1b25e062-a07b-4350-84c9-9247d3a0c144-kube-api-access-ngmkb\") pod \"multus-admission-controller-69db94689b-9jrsz\" (UID: \"1b25e062-a07b-4350-84c9-9247d3a0c144\") " pod="openshift-multus/multus-admission-controller-69db94689b-9jrsz" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.661545 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f5a878c2-9a7b-4d34-a9ee-28bdd05d3151-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-z67hs\" (UID: \"f5a878c2-9a7b-4d34-a9ee-28bdd05d3151\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z67hs" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.661563 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1b25e062-a07b-4350-84c9-9247d3a0c144-webhook-certs\") pod \"multus-admission-controller-69db94689b-9jrsz\" (UID: \"1b25e062-a07b-4350-84c9-9247d3a0c144\") " pod="openshift-multus/multus-admission-controller-69db94689b-9jrsz" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.661604 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f5a878c2-9a7b-4d34-a9ee-28bdd05d3151-marketplace-trusted-ca\") pod 
\"marketplace-operator-547dbd544d-z67hs\" (UID: \"f5a878c2-9a7b-4d34-a9ee-28bdd05d3151\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z67hs" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.661657 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0db2f4be-492e-40be-84b1-3578f55c1efb-profile-collector-cert\") pod \"olm-operator-5cdf44d969-hnjk6\" (UID: \"0db2f4be-492e-40be-84b1-3578f55c1efb\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-hnjk6" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.661680 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/85f163dc-d2c8-4d62-9fa2-48d75035cbfa-tmp-dir\") pod \"dns-default-8224m\" (UID: \"85f163dc-d2c8-4d62-9fa2-48d75035cbfa\") " pod="openshift-dns/dns-default-8224m" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.661717 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7cdaf693-5dea-4260-bb45-209fcd54b53e-socket-dir\") pod \"csi-hostpathplugin-qpbqz\" (UID: \"7cdaf693-5dea-4260-bb45-209fcd54b53e\") " pod="hostpath-provisioner/csi-hostpathplugin-qpbqz" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.661741 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7cdaf693-5dea-4260-bb45-209fcd54b53e-mountpoint-dir\") pod \"csi-hostpathplugin-qpbqz\" (UID: \"7cdaf693-5dea-4260-bb45-209fcd54b53e\") " pod="hostpath-provisioner/csi-hostpathplugin-qpbqz" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.661769 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/303b471c-5851-4624-a1c6-5d8f826641b1-tmp-dir\") pod \"dns-operator-799b87ffcd-7855f\" (UID: \"303b471c-5851-4624-a1c6-5d8f826641b1\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-7855f" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.661792 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkzgs\" (UniqueName: \"kubernetes.io/projected/303b471c-5851-4624-a1c6-5d8f826641b1-kube-api-access-dkzgs\") pod \"dns-operator-799b87ffcd-7855f\" (UID: \"303b471c-5851-4624-a1c6-5d8f826641b1\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-7855f" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.661807 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b4c88372-61dd-4fb9-8bcf-7c51ec904dd8-ready\") pod \"cni-sysctl-allowlist-ds-bc4nv\" (UID: \"b4c88372-61dd-4fb9-8bcf-7c51ec904dd8\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bc4nv" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.661826 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b4c88372-61dd-4fb9-8bcf-7c51ec904dd8-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-bc4nv\" (UID: \"b4c88372-61dd-4fb9-8bcf-7c51ec904dd8\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bc4nv" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.661942 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/85f163dc-d2c8-4d62-9fa2-48d75035cbfa-metrics-tls\") pod \"dns-default-8224m\" (UID: \"85f163dc-d2c8-4d62-9fa2-48d75035cbfa\") " pod="openshift-dns/dns-default-8224m" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.661973 5119 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7cdaf693-5dea-4260-bb45-209fcd54b53e-plugins-dir\") pod \"csi-hostpathplugin-qpbqz\" (UID: \"7cdaf693-5dea-4260-bb45-209fcd54b53e\") " pod="hostpath-provisioner/csi-hostpathplugin-qpbqz" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.661993 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a1df92f8-e439-4da3-af25-cdbf8374d2da-certs\") pod \"machine-config-server-gjxgc\" (UID: \"a1df92f8-e439-4da3-af25-cdbf8374d2da\") " pod="openshift-machine-config-operator/machine-config-server-gjxgc" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.662010 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e8369fc8-80db-4a4f-9928-46a8acdb2128-cert\") pod \"ingress-canary-76tl2\" (UID: \"e8369fc8-80db-4a4f-9928-46a8acdb2128\") " pod="openshift-ingress-canary/ingress-canary-76tl2" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.662029 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ss7l\" (UniqueName: \"kubernetes.io/projected/e8369fc8-80db-4a4f-9928-46a8acdb2128-kube-api-access-2ss7l\") pod \"ingress-canary-76tl2\" (UID: \"e8369fc8-80db-4a4f-9928-46a8acdb2128\") " pod="openshift-ingress-canary/ingress-canary-76tl2" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.662060 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0e7b694c-1e5a-4209-9baf-67cd7bc2f3af-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-g4flp\" (UID: \"0e7b694c-1e5a-4209-9baf-67cd7bc2f3af\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-g4flp" Jan 21 
09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.662099 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7cdaf693-5dea-4260-bb45-209fcd54b53e-csi-data-dir\") pod \"csi-hostpathplugin-qpbqz\" (UID: \"7cdaf693-5dea-4260-bb45-209fcd54b53e\") " pod="hostpath-provisioner/csi-hostpathplugin-qpbqz" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.662115 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb9kz\" (UniqueName: \"kubernetes.io/projected/7cdaf693-5dea-4260-bb45-209fcd54b53e-kube-api-access-rb9kz\") pod \"csi-hostpathplugin-qpbqz\" (UID: \"7cdaf693-5dea-4260-bb45-209fcd54b53e\") " pod="hostpath-provisioner/csi-hostpathplugin-qpbqz" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.662133 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a1df92f8-e439-4da3-af25-cdbf8374d2da-node-bootstrap-token\") pod \"machine-config-server-gjxgc\" (UID: \"a1df92f8-e439-4da3-af25-cdbf8374d2da\") " pod="openshift-machine-config-operator/machine-config-server-gjxgc" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.662146 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0db2f4be-492e-40be-84b1-3578f55c1efb-tmpfs\") pod \"olm-operator-5cdf44d969-hnjk6\" (UID: \"0db2f4be-492e-40be-84b1-3578f55c1efb\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-hnjk6" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.662191 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.662218 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nfrl\" (UniqueName: \"kubernetes.io/projected/0db2f4be-492e-40be-84b1-3578f55c1efb-kube-api-access-2nfrl\") pod \"olm-operator-5cdf44d969-hnjk6\" (UID: \"0db2f4be-492e-40be-84b1-3578f55c1efb\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-hnjk6" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.662238 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/303b471c-5851-4624-a1c6-5d8f826641b1-metrics-tls\") pod \"dns-operator-799b87ffcd-7855f\" (UID: \"303b471c-5851-4624-a1c6-5d8f826641b1\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-7855f" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.662255 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f5a878c2-9a7b-4d34-a9ee-28bdd05d3151-tmp\") pod \"marketplace-operator-547dbd544d-z67hs\" (UID: \"f5a878c2-9a7b-4d34-a9ee-28bdd05d3151\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z67hs" Jan 21 09:56:37 crc kubenswrapper[5119]: E0121 09:56:37.666088 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:38.166070491 +0000 UTC m=+113.834162169 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771044 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771174 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0db2f4be-492e-40be-84b1-3578f55c1efb-srv-cert\") pod \"olm-operator-5cdf44d969-hnjk6\" (UID: \"0db2f4be-492e-40be-84b1-3578f55c1efb\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-hnjk6" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771202 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p6xww\" (UniqueName: \"kubernetes.io/projected/f5a878c2-9a7b-4d34-a9ee-28bdd05d3151-kube-api-access-p6xww\") pod \"marketplace-operator-547dbd544d-z67hs\" (UID: \"f5a878c2-9a7b-4d34-a9ee-28bdd05d3151\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z67hs" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771230 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7cdaf693-5dea-4260-bb45-209fcd54b53e-registration-dir\") pod \"csi-hostpathplugin-qpbqz\" (UID: 
\"7cdaf693-5dea-4260-bb45-209fcd54b53e\") " pod="hostpath-provisioner/csi-hostpathplugin-qpbqz" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771246 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-292jh\" (UniqueName: \"kubernetes.io/projected/85f163dc-d2c8-4d62-9fa2-48d75035cbfa-kube-api-access-292jh\") pod \"dns-default-8224m\" (UID: \"85f163dc-d2c8-4d62-9fa2-48d75035cbfa\") " pod="openshift-dns/dns-default-8224m" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771265 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b4c88372-61dd-4fb9-8bcf-7c51ec904dd8-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-bc4nv\" (UID: \"b4c88372-61dd-4fb9-8bcf-7c51ec904dd8\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bc4nv" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771285 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ngmkb\" (UniqueName: \"kubernetes.io/projected/1b25e062-a07b-4350-84c9-9247d3a0c144-kube-api-access-ngmkb\") pod \"multus-admission-controller-69db94689b-9jrsz\" (UID: \"1b25e062-a07b-4350-84c9-9247d3a0c144\") " pod="openshift-multus/multus-admission-controller-69db94689b-9jrsz" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771300 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f5a878c2-9a7b-4d34-a9ee-28bdd05d3151-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-z67hs\" (UID: \"f5a878c2-9a7b-4d34-a9ee-28bdd05d3151\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z67hs" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771317 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/1b25e062-a07b-4350-84c9-9247d3a0c144-webhook-certs\") pod \"multus-admission-controller-69db94689b-9jrsz\" (UID: \"1b25e062-a07b-4350-84c9-9247d3a0c144\") " pod="openshift-multus/multus-admission-controller-69db94689b-9jrsz" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771333 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f5a878c2-9a7b-4d34-a9ee-28bdd05d3151-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-z67hs\" (UID: \"f5a878c2-9a7b-4d34-a9ee-28bdd05d3151\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z67hs" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771347 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0db2f4be-492e-40be-84b1-3578f55c1efb-profile-collector-cert\") pod \"olm-operator-5cdf44d969-hnjk6\" (UID: \"0db2f4be-492e-40be-84b1-3578f55c1efb\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-hnjk6" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771368 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/85f163dc-d2c8-4d62-9fa2-48d75035cbfa-tmp-dir\") pod \"dns-default-8224m\" (UID: \"85f163dc-d2c8-4d62-9fa2-48d75035cbfa\") " pod="openshift-dns/dns-default-8224m" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771385 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7cdaf693-5dea-4260-bb45-209fcd54b53e-socket-dir\") pod \"csi-hostpathplugin-qpbqz\" (UID: \"7cdaf693-5dea-4260-bb45-209fcd54b53e\") " pod="hostpath-provisioner/csi-hostpathplugin-qpbqz" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771404 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7cdaf693-5dea-4260-bb45-209fcd54b53e-mountpoint-dir\") pod \"csi-hostpathplugin-qpbqz\" (UID: \"7cdaf693-5dea-4260-bb45-209fcd54b53e\") " pod="hostpath-provisioner/csi-hostpathplugin-qpbqz" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771423 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/303b471c-5851-4624-a1c6-5d8f826641b1-tmp-dir\") pod \"dns-operator-799b87ffcd-7855f\" (UID: \"303b471c-5851-4624-a1c6-5d8f826641b1\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-7855f" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771443 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dkzgs\" (UniqueName: \"kubernetes.io/projected/303b471c-5851-4624-a1c6-5d8f826641b1-kube-api-access-dkzgs\") pod \"dns-operator-799b87ffcd-7855f\" (UID: \"303b471c-5851-4624-a1c6-5d8f826641b1\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-7855f" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771459 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b4c88372-61dd-4fb9-8bcf-7c51ec904dd8-ready\") pod \"cni-sysctl-allowlist-ds-bc4nv\" (UID: \"b4c88372-61dd-4fb9-8bcf-7c51ec904dd8\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bc4nv" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771477 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b4c88372-61dd-4fb9-8bcf-7c51ec904dd8-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-bc4nv\" (UID: \"b4c88372-61dd-4fb9-8bcf-7c51ec904dd8\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bc4nv" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771492 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/85f163dc-d2c8-4d62-9fa2-48d75035cbfa-metrics-tls\") pod \"dns-default-8224m\" (UID: \"85f163dc-d2c8-4d62-9fa2-48d75035cbfa\") " pod="openshift-dns/dns-default-8224m" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771512 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7cdaf693-5dea-4260-bb45-209fcd54b53e-plugins-dir\") pod \"csi-hostpathplugin-qpbqz\" (UID: \"7cdaf693-5dea-4260-bb45-209fcd54b53e\") " pod="hostpath-provisioner/csi-hostpathplugin-qpbqz" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771623 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a1df92f8-e439-4da3-af25-cdbf8374d2da-certs\") pod \"machine-config-server-gjxgc\" (UID: \"a1df92f8-e439-4da3-af25-cdbf8374d2da\") " pod="openshift-machine-config-operator/machine-config-server-gjxgc" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771654 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e8369fc8-80db-4a4f-9928-46a8acdb2128-cert\") pod \"ingress-canary-76tl2\" (UID: \"e8369fc8-80db-4a4f-9928-46a8acdb2128\") " pod="openshift-ingress-canary/ingress-canary-76tl2" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771676 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2ss7l\" (UniqueName: \"kubernetes.io/projected/e8369fc8-80db-4a4f-9928-46a8acdb2128-kube-api-access-2ss7l\") pod \"ingress-canary-76tl2\" (UID: \"e8369fc8-80db-4a4f-9928-46a8acdb2128\") " pod="openshift-ingress-canary/ingress-canary-76tl2" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771701 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/0e7b694c-1e5a-4209-9baf-67cd7bc2f3af-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-g4flp\" (UID: \"0e7b694c-1e5a-4209-9baf-67cd7bc2f3af\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-g4flp" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771731 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7cdaf693-5dea-4260-bb45-209fcd54b53e-csi-data-dir\") pod \"csi-hostpathplugin-qpbqz\" (UID: \"7cdaf693-5dea-4260-bb45-209fcd54b53e\") " pod="hostpath-provisioner/csi-hostpathplugin-qpbqz" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771751 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rb9kz\" (UniqueName: \"kubernetes.io/projected/7cdaf693-5dea-4260-bb45-209fcd54b53e-kube-api-access-rb9kz\") pod \"csi-hostpathplugin-qpbqz\" (UID: \"7cdaf693-5dea-4260-bb45-209fcd54b53e\") " pod="hostpath-provisioner/csi-hostpathplugin-qpbqz" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771774 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a1df92f8-e439-4da3-af25-cdbf8374d2da-node-bootstrap-token\") pod \"machine-config-server-gjxgc\" (UID: \"a1df92f8-e439-4da3-af25-cdbf8374d2da\") " pod="openshift-machine-config-operator/machine-config-server-gjxgc" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771790 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0db2f4be-492e-40be-84b1-3578f55c1efb-tmpfs\") pod \"olm-operator-5cdf44d969-hnjk6\" (UID: \"0db2f4be-492e-40be-84b1-3578f55c1efb\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-hnjk6" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771814 5119 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2nfrl\" (UniqueName: \"kubernetes.io/projected/0db2f4be-492e-40be-84b1-3578f55c1efb-kube-api-access-2nfrl\") pod \"olm-operator-5cdf44d969-hnjk6\" (UID: \"0db2f4be-492e-40be-84b1-3578f55c1efb\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-hnjk6" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771829 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/303b471c-5851-4624-a1c6-5d8f826641b1-metrics-tls\") pod \"dns-operator-799b87ffcd-7855f\" (UID: \"303b471c-5851-4624-a1c6-5d8f826641b1\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-7855f" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771845 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f5a878c2-9a7b-4d34-a9ee-28bdd05d3151-tmp\") pod \"marketplace-operator-547dbd544d-z67hs\" (UID: \"f5a878c2-9a7b-4d34-a9ee-28bdd05d3151\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z67hs" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771860 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c82nr\" (UniqueName: \"kubernetes.io/projected/0e7b694c-1e5a-4209-9baf-67cd7bc2f3af-kube-api-access-c82nr\") pod \"cluster-samples-operator-6b564684c8-g4flp\" (UID: \"0e7b694c-1e5a-4209-9baf-67cd7bc2f3af\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-g4flp" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771878 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6j8zg\" (UniqueName: \"kubernetes.io/projected/b4c88372-61dd-4fb9-8bcf-7c51ec904dd8-kube-api-access-6j8zg\") pod \"cni-sysctl-allowlist-ds-bc4nv\" (UID: \"b4c88372-61dd-4fb9-8bcf-7c51ec904dd8\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bc4nv" Jan 
21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771969 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-572s8\" (UniqueName: \"kubernetes.io/projected/a1df92f8-e439-4da3-af25-cdbf8374d2da-kube-api-access-572s8\") pod \"machine-config-server-gjxgc\" (UID: \"a1df92f8-e439-4da3-af25-cdbf8374d2da\") " pod="openshift-machine-config-operator/machine-config-server-gjxgc" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.771989 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85f163dc-d2c8-4d62-9fa2-48d75035cbfa-config-volume\") pod \"dns-default-8224m\" (UID: \"85f163dc-d2c8-4d62-9fa2-48d75035cbfa\") " pod="openshift-dns/dns-default-8224m" Jan 21 09:56:37 crc kubenswrapper[5119]: E0121 09:56:37.772125 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:38.272109545 +0000 UTC m=+113.940201223 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.773056 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f5a878c2-9a7b-4d34-a9ee-28bdd05d3151-tmp\") pod \"marketplace-operator-547dbd544d-z67hs\" (UID: \"f5a878c2-9a7b-4d34-a9ee-28bdd05d3151\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z67hs" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.773321 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0db2f4be-492e-40be-84b1-3578f55c1efb-tmpfs\") pod \"olm-operator-5cdf44d969-hnjk6\" (UID: \"0db2f4be-492e-40be-84b1-3578f55c1efb\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-hnjk6" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.774235 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/85f163dc-d2c8-4d62-9fa2-48d75035cbfa-tmp-dir\") pod \"dns-default-8224m\" (UID: \"85f163dc-d2c8-4d62-9fa2-48d75035cbfa\") " pod="openshift-dns/dns-default-8224m" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.774638 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7cdaf693-5dea-4260-bb45-209fcd54b53e-plugins-dir\") pod \"csi-hostpathplugin-qpbqz\" (UID: \"7cdaf693-5dea-4260-bb45-209fcd54b53e\") " pod="hostpath-provisioner/csi-hostpathplugin-qpbqz" Jan 21 09:56:37 crc kubenswrapper[5119]: 
I0121 09:56:37.774672 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b4c88372-61dd-4fb9-8bcf-7c51ec904dd8-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-bc4nv\" (UID: \"b4c88372-61dd-4fb9-8bcf-7c51ec904dd8\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bc4nv" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.775814 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7cdaf693-5dea-4260-bb45-209fcd54b53e-csi-data-dir\") pod \"csi-hostpathplugin-qpbqz\" (UID: \"7cdaf693-5dea-4260-bb45-209fcd54b53e\") " pod="hostpath-provisioner/csi-hostpathplugin-qpbqz" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.780000 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/303b471c-5851-4624-a1c6-5d8f826641b1-tmp-dir\") pod \"dns-operator-799b87ffcd-7855f\" (UID: \"303b471c-5851-4624-a1c6-5d8f826641b1\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-7855f" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.783140 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0e7b694c-1e5a-4209-9baf-67cd7bc2f3af-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-g4flp\" (UID: \"0e7b694c-1e5a-4209-9baf-67cd7bc2f3af\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-g4flp" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.784660 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85f163dc-d2c8-4d62-9fa2-48d75035cbfa-config-volume\") pod \"dns-default-8224m\" (UID: \"85f163dc-d2c8-4d62-9fa2-48d75035cbfa\") " pod="openshift-dns/dns-default-8224m" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.784968 5119 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b4c88372-61dd-4fb9-8bcf-7c51ec904dd8-ready\") pod \"cni-sysctl-allowlist-ds-bc4nv\" (UID: \"b4c88372-61dd-4fb9-8bcf-7c51ec904dd8\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bc4nv" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.785035 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b4c88372-61dd-4fb9-8bcf-7c51ec904dd8-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-bc4nv\" (UID: \"b4c88372-61dd-4fb9-8bcf-7c51ec904dd8\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bc4nv" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.785454 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7cdaf693-5dea-4260-bb45-209fcd54b53e-mountpoint-dir\") pod \"csi-hostpathplugin-qpbqz\" (UID: \"7cdaf693-5dea-4260-bb45-209fcd54b53e\") " pod="hostpath-provisioner/csi-hostpathplugin-qpbqz" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.785645 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7cdaf693-5dea-4260-bb45-209fcd54b53e-registration-dir\") pod \"csi-hostpathplugin-qpbqz\" (UID: \"7cdaf693-5dea-4260-bb45-209fcd54b53e\") " pod="hostpath-provisioner/csi-hostpathplugin-qpbqz" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.785771 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7cdaf693-5dea-4260-bb45-209fcd54b53e-socket-dir\") pod \"csi-hostpathplugin-qpbqz\" (UID: \"7cdaf693-5dea-4260-bb45-209fcd54b53e\") " pod="hostpath-provisioner/csi-hostpathplugin-qpbqz" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.787717 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/85f163dc-d2c8-4d62-9fa2-48d75035cbfa-metrics-tls\") pod \"dns-default-8224m\" (UID: \"85f163dc-d2c8-4d62-9fa2-48d75035cbfa\") " pod="openshift-dns/dns-default-8224m" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.787768 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0db2f4be-492e-40be-84b1-3578f55c1efb-profile-collector-cert\") pod \"olm-operator-5cdf44d969-hnjk6\" (UID: \"0db2f4be-492e-40be-84b1-3578f55c1efb\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-hnjk6" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.788413 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a1df92f8-e439-4da3-af25-cdbf8374d2da-node-bootstrap-token\") pod \"machine-config-server-gjxgc\" (UID: \"a1df92f8-e439-4da3-af25-cdbf8374d2da\") " pod="openshift-machine-config-operator/machine-config-server-gjxgc" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.791416 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1b25e062-a07b-4350-84c9-9247d3a0c144-webhook-certs\") pod \"multus-admission-controller-69db94689b-9jrsz\" (UID: \"1b25e062-a07b-4350-84c9-9247d3a0c144\") " pod="openshift-multus/multus-admission-controller-69db94689b-9jrsz" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.792539 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/303b471c-5851-4624-a1c6-5d8f826641b1-metrics-tls\") pod \"dns-operator-799b87ffcd-7855f\" (UID: \"303b471c-5851-4624-a1c6-5d8f826641b1\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-7855f" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.796270 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" 
(UniqueName: \"kubernetes.io/secret/0db2f4be-492e-40be-84b1-3578f55c1efb-srv-cert\") pod \"olm-operator-5cdf44d969-hnjk6\" (UID: \"0db2f4be-492e-40be-84b1-3578f55c1efb\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-hnjk6" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.796374 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f5a878c2-9a7b-4d34-a9ee-28bdd05d3151-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-z67hs\" (UID: \"f5a878c2-9a7b-4d34-a9ee-28bdd05d3151\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z67hs" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.802244 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f5a878c2-9a7b-4d34-a9ee-28bdd05d3151-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-z67hs\" (UID: \"f5a878c2-9a7b-4d34-a9ee-28bdd05d3151\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z67hs" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.804223 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a1df92f8-e439-4da3-af25-cdbf8374d2da-certs\") pod \"machine-config-server-gjxgc\" (UID: \"a1df92f8-e439-4da3-af25-cdbf8374d2da\") " pod="openshift-machine-config-operator/machine-config-server-gjxgc" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.811886 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e8369fc8-80db-4a4f-9928-46a8acdb2128-cert\") pod \"ingress-canary-76tl2\" (UID: \"e8369fc8-80db-4a4f-9928-46a8acdb2128\") " pod="openshift-ingress-canary/ingress-canary-76tl2" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.814375 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2nfrl\" (UniqueName: \"kubernetes.io/projected/0db2f4be-492e-40be-84b1-3578f55c1efb-kube-api-access-2nfrl\") pod \"olm-operator-5cdf44d969-hnjk6\" (UID: \"0db2f4be-492e-40be-84b1-3578f55c1efb\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-hnjk6" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.835662 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j8zg\" (UniqueName: \"kubernetes.io/projected/b4c88372-61dd-4fb9-8bcf-7c51ec904dd8-kube-api-access-6j8zg\") pod \"cni-sysctl-allowlist-ds-bc4nv\" (UID: \"b4c88372-61dd-4fb9-8bcf-7c51ec904dd8\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bc4nv" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.859838 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngmkb\" (UniqueName: \"kubernetes.io/projected/1b25e062-a07b-4350-84c9-9247d3a0c144-kube-api-access-ngmkb\") pod \"multus-admission-controller-69db94689b-9jrsz\" (UID: \"1b25e062-a07b-4350-84c9-9247d3a0c144\") " pod="openshift-multus/multus-admission-controller-69db94689b-9jrsz" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.875394 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:37 crc kubenswrapper[5119]: E0121 09:56:37.875780 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:38.375766495 +0000 UTC m=+114.043858173 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.888847 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-bkq66"] Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.888897 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-8nd58"] Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.890801 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dc2k6"] Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.899104 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-292jh\" (UniqueName: \"kubernetes.io/projected/85f163dc-d2c8-4d62-9fa2-48d75035cbfa-kube-api-access-292jh\") pod \"dns-default-8224m\" (UID: \"85f163dc-d2c8-4d62-9fa2-48d75035cbfa\") " pod="openshift-dns/dns-default-8224m" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.901962 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6xww\" (UniqueName: \"kubernetes.io/projected/f5a878c2-9a7b-4d34-a9ee-28bdd05d3151-kube-api-access-p6xww\") pod \"marketplace-operator-547dbd544d-z67hs\" (UID: \"f5a878c2-9a7b-4d34-a9ee-28bdd05d3151\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z67hs" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.912862 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-hnjk6" Jan 21 09:56:37 crc kubenswrapper[5119]: W0121 09:56:37.920018 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcec067a0_6e27_4e3f_b03a_f37ffd10dd43.slice/crio-b8feb28f2d0da10a5903caec19f8f309586e69addaee28fe0e87990267284e49 WatchSource:0}: Error finding container b8feb28f2d0da10a5903caec19f8f309586e69addaee28fe0e87990267284e49: Status 404 returned error can't find the container with id b8feb28f2d0da10a5903caec19f8f309586e69addaee28fe0e87990267284e49 Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.924515 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-572s8\" (UniqueName: \"kubernetes.io/projected/a1df92f8-e439-4da3-af25-cdbf8374d2da-kube-api-access-572s8\") pod \"machine-config-server-gjxgc\" (UID: \"a1df92f8-e439-4da3-af25-cdbf8374d2da\") " pod="openshift-machine-config-operator/machine-config-server-gjxgc" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.962200 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb9kz\" (UniqueName: \"kubernetes.io/projected/7cdaf693-5dea-4260-bb45-209fcd54b53e-kube-api-access-rb9kz\") pod \"csi-hostpathplugin-qpbqz\" (UID: \"7cdaf693-5dea-4260-bb45-209fcd54b53e\") " pod="hostpath-provisioner/csi-hostpathplugin-qpbqz" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.974737 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ss7l\" (UniqueName: \"kubernetes.io/projected/e8369fc8-80db-4a4f-9928-46a8acdb2128-kube-api-access-2ss7l\") pod \"ingress-canary-76tl2\" (UID: \"e8369fc8-80db-4a4f-9928-46a8acdb2128\") " pod="openshift-ingress-canary/ingress-canary-76tl2" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.981091 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:37 crc kubenswrapper[5119]: E0121 09:56:37.981260 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:38.481234812 +0000 UTC m=+114.149326490 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.981687 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:37 crc kubenswrapper[5119]: E0121 09:56:37.982287 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:38.482276501 +0000 UTC m=+114.150368179 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.994223 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-9jrsz" Jan 21 09:56:37 crc kubenswrapper[5119]: I0121 09:56:37.994853 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-z67hs" Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.000556 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c82nr\" (UniqueName: \"kubernetes.io/projected/0e7b694c-1e5a-4209-9baf-67cd7bc2f3af-kube-api-access-c82nr\") pod \"cluster-samples-operator-6b564684c8-g4flp\" (UID: \"0e7b694c-1e5a-4209-9baf-67cd7bc2f3af\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-g4flp" Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.001076 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkzgs\" (UniqueName: \"kubernetes.io/projected/303b471c-5851-4624-a1c6-5d8f826641b1-kube-api-access-dkzgs\") pod \"dns-operator-799b87ffcd-7855f\" (UID: \"303b471c-5851-4624-a1c6-5d8f826641b1\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-7855f" Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.061537 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-7855f" Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.072160 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qpbqz" Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.087280 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:38 crc kubenswrapper[5119]: E0121 09:56:38.090337 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:38.590114262 +0000 UTC m=+114.258205940 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.090418 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-8224m" Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.090880 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:38 crc kubenswrapper[5119]: E0121 09:56:38.091252 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:38.591243843 +0000 UTC m=+114.259335511 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.106797 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-gjxgc" Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.113768 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-6jw94"] Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.114334 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5b4c8"] Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.126139 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-prvwt"] Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.126741 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-bc4nv" Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.129385 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-g4flp" Jan 21 09:56:38 crc kubenswrapper[5119]: W0121 09:56:38.129489 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1c5db5b_e8c1_4d79_aca9_10703c8e82db.slice/crio-0ff42f4a2f1c746ee5b28ffab462e2bc10c475a33f74b0ecf659f97975a209a3 WatchSource:0}: Error finding container 0ff42f4a2f1c746ee5b28ffab462e2bc10c475a33f74b0ecf659f97975a209a3: Status 404 returned error can't find the container with id 0ff42f4a2f1c746ee5b28ffab462e2bc10c475a33f74b0ecf659f97975a209a3 Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.130563 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-76tl2" Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.131121 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-x6xfs"] Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.139387 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nwbtj"] Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.143672 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fs8fn"] Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.146327 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-6kqm2" event={"ID":"1ccf6a04-2820-4b99-9dbd-2e6d111b4fed","Type":"ContainerStarted","Data":"4b1388d167eb9a4feb8f4881955f4136c3b9e5991890c26ecb5fe1fc5594018a"} Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.147352 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v7d7l"] Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.150925 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-jvrpf"] Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.164014 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lpk4t"] Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.168740 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-cn8mz" event={"ID":"333793e1-92de-4fbe-83b5-26b64848c6af","Type":"ContainerStarted","Data":"22c4e7601f58c7b5c833b4a3205f328ab66595a2896465ee5bbe990c7ef40ed3"} Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.168783 5119 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-cn8mz" event={"ID":"333793e1-92de-4fbe-83b5-26b64848c6af","Type":"ContainerStarted","Data":"78f780a34816281f2e8546c3b660eb732f9761e99338ed620dbbb991df3ad9c4"} Jan 21 09:56:38 crc kubenswrapper[5119]: W0121 09:56:38.179917 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19471fb3_19b2_42d4_967e_6b0620f686ce.slice/crio-8cc7f28d346b105f755431bb012b6b0e696e2ee54408abf232341166c01497a4 WatchSource:0}: Error finding container 8cc7f28d346b105f755431bb012b6b0e696e2ee54408abf232341166c01497a4: Status 404 returned error can't find the container with id 8cc7f28d346b105f755431bb012b6b0e696e2ee54408abf232341166c01497a4 Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.183586 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-8nd58" event={"ID":"fab52538-cd8b-408e-b571-f2dc516dc2a3","Type":"ContainerStarted","Data":"00290fcf4d3a898ce32391c966bf4d95768e97775292e3c716c46ecc572e69e3"} Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.191748 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:38 crc kubenswrapper[5119]: E0121 09:56:38.192022 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:38.692007075 +0000 UTC m=+114.360098753 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.209815 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-gngm4" event={"ID":"ee0294ff-f61f-492b-b738-fbbee8f757eb","Type":"ContainerStarted","Data":"49a6bf6ae56dfc316394a3f87c6c306e64599a06b80212271663b64264194b7a"}
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.209855 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-gngm4" event={"ID":"ee0294ff-f61f-492b-b738-fbbee8f757eb","Type":"ContainerStarted","Data":"b9bb6bb6502b2c156756fe1f28b9594041f889a6f90a1de6d8de4c4f64050de3"}
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.210596 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-gngm4"
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.219866 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-pxxr9" event={"ID":"49421ea1-6e3f-41b1-b0d6-e821cac2f8ab","Type":"ContainerStarted","Data":"52b2982167be98b347f25456562f838ed6202ab3ac4ab9733684c24e54899aa0"}
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.225243 5119 generic.go:358] "Generic (PLEG): container finished" podID="986b7816-8325-48f6-b5a5-2d51c9f31687" containerID="ff9884d23deb98b9a8f1e998cf9bd466d5979bd84f2f01a9be8acdd4b70d78fd" exitCode=0
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.225417 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86" event={"ID":"986b7816-8325-48f6-b5a5-2d51c9f31687","Type":"ContainerDied","Data":"ff9884d23deb98b9a8f1e998cf9bd466d5979bd84f2f01a9be8acdd4b70d78fd"}
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.225469 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86" event={"ID":"986b7816-8325-48f6-b5a5-2d51c9f31687","Type":"ContainerStarted","Data":"c4f12574083bed72edb93ab08132b73456067656dfde2cca1bf92f81d292a624"}
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.231676 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6z5rg" event={"ID":"65fa8c5a-91c4-411a-9586-2f893dfda634","Type":"ContainerStarted","Data":"a1bfd45e2126dc9177559a8ac93e87ddaeef6aafa9fb8356a030a6ff23492f79"}
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.231741 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6z5rg" event={"ID":"65fa8c5a-91c4-411a-9586-2f893dfda634","Type":"ContainerStarted","Data":"a32cff3d197512221342193a05465a02468a88d10d736daa16a6e447ebdaf241"}
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.241856 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-q8nfm" event={"ID":"f53b6ab7-e57d-4f85-adef-9a60515f8f1f","Type":"ContainerStarted","Data":"fe499b7174c2bdcf92728788c1ebaa1347a73922495315eb8a10eb6fd6049e8b"}
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.241896 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-q8nfm" event={"ID":"f53b6ab7-e57d-4f85-adef-9a60515f8f1f","Type":"ContainerStarted","Data":"304cfd66cc9dd19e059433737eac2672837212cf9dc77dde1d44c8e2afc7e3a8"}
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.242549 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-hnjk6"]
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.264136 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dc2k6" event={"ID":"cec067a0-6e27-4e3f-b03a-f37ffd10dd43","Type":"ContainerStarted","Data":"b8feb28f2d0da10a5903caec19f8f309586e69addaee28fe0e87990267284e49"}
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.268704 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bkq66" event={"ID":"36ec96a0-85cc-4757-ac20-cff015ffbe19","Type":"ContainerStarted","Data":"2a07299ead54bfe5d59f71380d9104fb50ff4e2829f22582682afc77d7caa0dd"}
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.270724 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-vkvvq"]
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.284811 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-7nw9s"]
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.287559 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-4dmmv" event={"ID":"bfe4596e-cffd-4e61-b095-455eea1ed712","Type":"ContainerStarted","Data":"24570ab933fc3e5d0b85a06e1280ffec65b74baedfa8c18ac0b848626c2f609c"}
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.287598 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-4dmmv" event={"ID":"bfe4596e-cffd-4e61-b095-455eea1ed712","Type":"ContainerStarted","Data":"a3739318d1ef297fbe966b12bd3ed92a601409c1b580fd3e920e467e48fc4790"}
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.290662 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hdhzp"]
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.293514 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:38 crc kubenswrapper[5119]: E0121 09:56:38.294754 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:38.79472668 +0000 UTC m=+114.462818378 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.303791 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mgn7f" event={"ID":"d758cf9c-d67a-46df-a626-14e4a6a92be8","Type":"ContainerStarted","Data":"d2335abadc7d916050645b0a0c3d70e010cb047d7c87530893ed073a6c0d7e3a"}
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.303825 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mgn7f" event={"ID":"d758cf9c-d67a-46df-a626-14e4a6a92be8","Type":"ContainerStarted","Data":"7cb2425fb7c0c0e90fd4b2e940915001955aada5972bdaf56dcc714818f30346"}
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.303849 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mgn7f" event={"ID":"d758cf9c-d67a-46df-a626-14e4a6a92be8","Type":"ContainerStarted","Data":"2962a6f3a4a8bebb90d539d78efa12ff6544f1124744a1ae720d11d4c4be57ba"}
Jan 21 09:56:38 crc kubenswrapper[5119]: W0121 09:56:38.324047 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4c88372_61dd_4fb9_8bcf_7c51ec904dd8.slice/crio-fdf5d15a2079b5dd2ec04c0e5e0a4851ead420df67e7fb15c10de454632501b1 WatchSource:0}: Error finding container fdf5d15a2079b5dd2ec04c0e5e0a4851ead420df67e7fb15c10de454632501b1: Status 404 returned error can't find the container with id fdf5d15a2079b5dd2ec04c0e5e0a4851ead420df67e7fb15c10de454632501b1
Jan 21 09:56:38 crc kubenswrapper[5119]: W0121 09:56:38.346761 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod315dcf5a_c0ec_4778_9118_2f68422fcc17.slice/crio-a573ee329b0ac387f7051fcd2425fd5d1b8444e9ec2dfadae79e858e82358bdd WatchSource:0}: Error finding container a573ee329b0ac387f7051fcd2425fd5d1b8444e9ec2dfadae79e858e82358bdd: Status 404 returned error can't find the container with id a573ee329b0ac387f7051fcd2425fd5d1b8444e9ec2dfadae79e858e82358bdd
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.351677 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4rx9w"]
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.354432 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-j7g7z"]
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.357061 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rjq8j" event={"ID":"f4fe1ed8-46ec-4253-8371-144cad3c3573","Type":"ContainerStarted","Data":"36999ea8dbc1d4584062f4a752ff3a800dfd91b74849cd1b3a5f7f37336f7fae"}
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.357945 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rjq8j"
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.373822 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-7tls5"]
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.380755 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-h4bjc" event={"ID":"7d4bb4e5-bb28-41fd-8095-0392fd6b8afb","Type":"ContainerStarted","Data":"aae2d2143e7e66f329e21084470d72c887b9f55dffa0a54665f7856279512696"}
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.380795 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-h4bjc" event={"ID":"7d4bb4e5-bb28-41fd-8095-0392fd6b8afb","Type":"ContainerStarted","Data":"6023256fcbf7457c61bd5ad5d5f78b8236ecf6b1cab2547c14d626e6107454fc"}
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.395009 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:38 crc kubenswrapper[5119]: E0121 09:56:38.396387 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:38.896365426 +0000 UTC m=+114.564457104 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.403329 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rjq8j"
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.408785 5119 generic.go:358] "Generic (PLEG): container finished" podID="9202c0b0-32fd-49a9-85ce-98c79744bfcf" containerID="fc492403c38b1ff9e80889f85af204e3b7666fe6c3d0e7affc7de1968798819c" exitCode=0
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.409648 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-dpf2h" event={"ID":"9202c0b0-32fd-49a9-85ce-98c79744bfcf","Type":"ContainerDied","Data":"fc492403c38b1ff9e80889f85af204e3b7666fe6c3d0e7affc7de1968798819c"}
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.426469 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-2glqc"]
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.464267 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98"
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.499861 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:38 crc kubenswrapper[5119]: E0121 09:56:38.511055 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:39.011035961 +0000 UTC m=+114.679127639 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.606014 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:38 crc kubenswrapper[5119]: E0121 09:56:38.606710 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:39.106537283 +0000 UTC m=+114.774628961 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.607134 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:38 crc kubenswrapper[5119]: E0121 09:56:38.608731 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:39.107501648 +0000 UTC m=+114.775593326 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.705089 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-9jrsz"]
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.708806 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-z67hs"]
Jan 21 09:56:38 crc kubenswrapper[5119]: E0121 09:56:38.708114 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:39.208097946 +0000 UTC m=+114.876189624 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.708053 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.709123 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:38 crc kubenswrapper[5119]: E0121 09:56:38.709549 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:39.209534234 +0000 UTC m=+114.877625912 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.752432 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qpbqz"]
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.762730 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-8224m"]
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.801060 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-7855f"]
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.815746 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:38 crc kubenswrapper[5119]: E0121 09:56:38.816229 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:39.316210636 +0000 UTC m=+114.984302314 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:38 crc kubenswrapper[5119]: W0121 09:56:38.896818 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cdaf693_5dea_4260_bb45_209fcd54b53e.slice/crio-2cb9c9a4b83bf569481832216eea7543093d658f80a2fe2394b0b73edccf8a50 WatchSource:0}: Error finding container 2cb9c9a4b83bf569481832216eea7543093d658f80a2fe2394b0b73edccf8a50: Status 404 returned error can't find the container with id 2cb9c9a4b83bf569481832216eea7543093d658f80a2fe2394b0b73edccf8a50
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.919004 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:38 crc kubenswrapper[5119]: E0121 09:56:38.919485 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:39.419464504 +0000 UTC m=+115.087556192 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.924170 5119 ???:1] "http: TLS handshake error from 192.168.126.11:38120: no serving certificate available for the kubelet"
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.946199 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-76tl2"]
Jan 21 09:56:38 crc kubenswrapper[5119]: W0121 09:56:38.951826 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85f163dc_d2c8_4d62_9fa2_48d75035cbfa.slice/crio-bdf44786d9e6ff716ae6268b563d83f76ec23d98570caa6c5ee3dd702e24e9ae WatchSource:0}: Error finding container bdf44786d9e6ff716ae6268b563d83f76ec23d98570caa6c5ee3dd702e24e9ae: Status 404 returned error can't find the container with id bdf44786d9e6ff716ae6268b563d83f76ec23d98570caa6c5ee3dd702e24e9ae
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.954665 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-g4flp"]
Jan 21 09:56:38 crc kubenswrapper[5119]: I0121 09:56:38.985018 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-gngm4"
Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.018788 5119 ???:1] "http: TLS handshake error from 192.168.126.11:38136: no serving certificate available for the kubelet"
Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.020396 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:39 crc kubenswrapper[5119]: E0121 09:56:39.021249 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:39.521205133 +0000 UTC m=+115.189296811 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.059770 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-cn8mz"
Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.070691 5119 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cn8mz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 09:56:39 crc kubenswrapper[5119]: [-]has-synced failed: reason withheld
Jan 21 09:56:39 crc kubenswrapper[5119]: [+]process-running ok
Jan 21 09:56:39 crc kubenswrapper[5119]: healthz check failed
Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.071043 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cn8mz" podUID="333793e1-92de-4fbe-83b5-26b64848c6af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.125762 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:39 crc kubenswrapper[5119]: E0121 09:56:39.126093 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:39.626079635 +0000 UTC m=+115.294171313 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.128278 5119 ???:1] "http: TLS handshake error from 192.168.126.11:38152: no serving certificate available for the kubelet"
Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.221536 5119 ???:1] "http: TLS handshake error from 192.168.126.11:38154: no serving certificate available for the kubelet"
Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.226859 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:39 crc kubenswrapper[5119]: E0121 09:56:39.227434 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:39.727413103 +0000 UTC m=+115.395504781 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.319940 5119 ???:1] "http: TLS handshake error from 192.168.126.11:38164: no serving certificate available for the kubelet"
Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.329464 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:39 crc kubenswrapper[5119]: E0121 09:56:39.329770 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:39.829758938 +0000 UTC m=+115.497850616 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.398342 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-gngm4" podStartSLOduration=95.39832213 podStartE2EDuration="1m35.39832213s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:39.395749261 +0000 UTC m=+115.063840939" watchObservedRunningTime="2026-01-21 09:56:39.39832213 +0000 UTC m=+115.066413808"
Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.399631 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-4dmmv" podStartSLOduration=96.399625745 podStartE2EDuration="1m36.399625745s" podCreationTimestamp="2026-01-21 09:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:39.356959205 +0000 UTC m=+115.025050883" watchObservedRunningTime="2026-01-21 09:56:39.399625745 +0000 UTC m=+115.067717423"
Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.430668 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:39 crc kubenswrapper[5119]: E0121 09:56:39.431222 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:39.931205219 +0000 UTC m=+115.599296897 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.439040 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-q8nfm" podStartSLOduration=95.439026628 podStartE2EDuration="1m35.439026628s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:39.43870136 +0000 UTC m=+115.106793038" watchObservedRunningTime="2026-01-21 09:56:39.439026628 +0000 UTC m=+115.107118306"
Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.439868 5119 ???:1] "http: TLS handshake error from 192.168.126.11:38178: no serving certificate available for the kubelet"
Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.487387 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-cn8mz" podStartSLOduration=95.48737296 podStartE2EDuration="1m35.48737296s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:39.486720843 +0000 UTC m=+115.154812531" watchObservedRunningTime="2026-01-21 09:56:39.48737296 +0000 UTC m=+115.155464638"
Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.497267 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w7cjs"]
Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.502786 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w7cjs"
Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.512191 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w7cjs"]
Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.517770 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.532831 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5b4c8" event={"ID":"197acdb1-438c-41ba-8b8d-a78197486cd7","Type":"ContainerStarted","Data":"bbf28e0f321a47ef60df0dc784b8d3c55d86faceb3b6a0c671bd6acbad68d17a"}
Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.532871 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5b4c8" event={"ID":"197acdb1-438c-41ba-8b8d-a78197486cd7","Type":"ContainerStarted","Data":"b1791a834ed2d22d16c33439f15beab73cdfc759bd43d314486bc7d9a6f1b0e5"}
Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.534630 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:39 crc kubenswrapper[5119]: E0121 09:56:39.534949 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:40.034937641 +0000 UTC m=+115.703029319 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.582701 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qpbqz" event={"ID":"7cdaf693-5dea-4260-bb45-209fcd54b53e","Type":"ContainerStarted","Data":"2cb9c9a4b83bf569481832216eea7543093d658f80a2fe2394b0b73edccf8a50"}
Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.605338 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-pxxr9" podStartSLOduration=96.605320011 podStartE2EDuration="1m36.605320011s" podCreationTimestamp="2026-01-21 09:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:39.57866705 +0000 UTC m=+115.246758728" watchObservedRunningTime="2026-01-21 09:56:39.605320011 +0000 UTC m=+115.273411679"
Jan
21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.606753 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6z5rg" podStartSLOduration=95.60674498 podStartE2EDuration="1m35.60674498s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:39.603842433 +0000 UTC m=+115.271934111" watchObservedRunningTime="2026-01-21 09:56:39.60674498 +0000 UTC m=+115.274836648" Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.626110 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-7855f" event={"ID":"303b471c-5851-4624-a1c6-5d8f826641b1","Type":"ContainerStarted","Data":"3d2df24b1f2e7accc38afdd09213083d5429da1640f0be01282136ad66c37eb7"} Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.635365 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.635474 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49467157-6fc6-4f0b-b833-1b95a6068d7e-catalog-content\") pod \"certified-operators-w7cjs\" (UID: \"49467157-6fc6-4f0b-b833-1b95a6068d7e\") " pod="openshift-marketplace/certified-operators-w7cjs" Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.635527 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vswtq\" (UniqueName: 
\"kubernetes.io/projected/49467157-6fc6-4f0b-b833-1b95a6068d7e-kube-api-access-vswtq\") pod \"certified-operators-w7cjs\" (UID: \"49467157-6fc6-4f0b-b833-1b95a6068d7e\") " pod="openshift-marketplace/certified-operators-w7cjs" Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.635547 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49467157-6fc6-4f0b-b833-1b95a6068d7e-utilities\") pod \"certified-operators-w7cjs\" (UID: \"49467157-6fc6-4f0b-b833-1b95a6068d7e\") " pod="openshift-marketplace/certified-operators-w7cjs" Jan 21 09:56:39 crc kubenswrapper[5119]: E0121 09:56:39.635686 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:40.135668323 +0000 UTC m=+115.803760001 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.639564 5119 ???:1] "http: TLS handshake error from 192.168.126.11:38190: no serving certificate available for the kubelet" Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.647235 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98" podStartSLOduration=95.647040896 podStartE2EDuration="1m35.647040896s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:39.645039803 +0000 UTC m=+115.313131481" watchObservedRunningTime="2026-01-21 09:56:39.647040896 +0000 UTC m=+115.315132574" Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.680217 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9gxxh"] Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.694260 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9gxxh"] Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.694409 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9gxxh" Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.717364 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.725864 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4rx9w" event={"ID":"984cb670-8e15-4092-bf42-f3c6337e1cad","Type":"ContainerStarted","Data":"6bfcf711b737c3815df3b8bd74797eabf6846fbe7c6d7fa41cca941d67cec650"} Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.739567 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vswtq\" (UniqueName: \"kubernetes.io/projected/49467157-6fc6-4f0b-b833-1b95a6068d7e-kube-api-access-vswtq\") pod \"certified-operators-w7cjs\" (UID: \"49467157-6fc6-4f0b-b833-1b95a6068d7e\") " pod="openshift-marketplace/certified-operators-w7cjs" Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.739625 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49467157-6fc6-4f0b-b833-1b95a6068d7e-utilities\") pod \"certified-operators-w7cjs\" (UID: \"49467157-6fc6-4f0b-b833-1b95a6068d7e\") " pod="openshift-marketplace/certified-operators-w7cjs" Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.739889 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.740001 5119 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49467157-6fc6-4f0b-b833-1b95a6068d7e-catalog-content\") pod \"certified-operators-w7cjs\" (UID: \"49467157-6fc6-4f0b-b833-1b95a6068d7e\") " pod="openshift-marketplace/certified-operators-w7cjs" Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.740004 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49467157-6fc6-4f0b-b833-1b95a6068d7e-utilities\") pod \"certified-operators-w7cjs\" (UID: \"49467157-6fc6-4f0b-b833-1b95a6068d7e\") " pod="openshift-marketplace/certified-operators-w7cjs" Jan 21 09:56:39 crc kubenswrapper[5119]: E0121 09:56:39.740242 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:40.240228306 +0000 UTC m=+115.908319974 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.740664 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49467157-6fc6-4f0b-b833-1b95a6068d7e-catalog-content\") pod \"certified-operators-w7cjs\" (UID: \"49467157-6fc6-4f0b-b833-1b95a6068d7e\") " pod="openshift-marketplace/certified-operators-w7cjs" Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.742948 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc" event={"ID":"47512efb-ea0a-42ac-a2c6-fd3017df0ce1","Type":"ContainerStarted","Data":"a303543454334d6cb14e375c438f5a88322f9a082948f5fd66752bd8cf5e814c"} Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.794028 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vswtq\" (UniqueName: \"kubernetes.io/projected/49467157-6fc6-4f0b-b833-1b95a6068d7e-kube-api-access-vswtq\") pod \"certified-operators-w7cjs\" (UID: \"49467157-6fc6-4f0b-b833-1b95a6068d7e\") " pod="openshift-marketplace/certified-operators-w7cjs" Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.813988 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mgn7f" podStartSLOduration=95.813973188 podStartE2EDuration="1m35.813973188s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:39.813155105 +0000 UTC m=+115.481246783" watchObservedRunningTime="2026-01-21 09:56:39.813973188 +0000 UTC m=+115.482064866" Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.833815 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bkq66" event={"ID":"36ec96a0-85cc-4757-ac20-cff015ffbe19","Type":"ContainerStarted","Data":"7610adc8e01df34d49ebbbaeb9ed9441c0d2b872e78f215a1050d854010d3058"} Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.849095 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.849284 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bccb111-fc78-420c-bb88-788974b0d7d5-catalog-content\") pod \"community-operators-9gxxh\" (UID: \"9bccb111-fc78-420c-bb88-788974b0d7d5\") " pod="openshift-marketplace/community-operators-9gxxh" Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.849381 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljf6z\" (UniqueName: \"kubernetes.io/projected/9bccb111-fc78-420c-bb88-788974b0d7d5-kube-api-access-ljf6z\") pod \"community-operators-9gxxh\" (UID: \"9bccb111-fc78-420c-bb88-788974b0d7d5\") " pod="openshift-marketplace/community-operators-9gxxh" Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.849431 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9bccb111-fc78-420c-bb88-788974b0d7d5-utilities\") pod \"community-operators-9gxxh\" (UID: \"9bccb111-fc78-420c-bb88-788974b0d7d5\") " pod="openshift-marketplace/community-operators-9gxxh" Jan 21 09:56:39 crc kubenswrapper[5119]: E0121 09:56:39.849536 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:40.349520407 +0000 UTC m=+116.017612085 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.868535 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-h4bjc" podStartSLOduration=95.868509524 podStartE2EDuration="1m35.868509524s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:39.867908399 +0000 UTC m=+115.536000087" watchObservedRunningTime="2026-01-21 09:56:39.868509524 +0000 UTC m=+115.536601202" Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.875757 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-9jrsz" 
event={"ID":"1b25e062-a07b-4350-84c9-9247d3a0c144","Type":"ContainerStarted","Data":"df030807a31221baaa4c2eef144f2f57ab7c6e6cf30a764ea58f3e6f91e4b2df"} Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.891275 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wn7cq"] Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.907222 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w7cjs" Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.909135 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wn7cq" Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.916327 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wn7cq"] Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.950387 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bccb111-fc78-420c-bb88-788974b0d7d5-catalog-content\") pod \"community-operators-9gxxh\" (UID: \"9bccb111-fc78-420c-bb88-788974b0d7d5\") " pod="openshift-marketplace/community-operators-9gxxh" Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.950732 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.950767 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ljf6z\" (UniqueName: 
\"kubernetes.io/projected/9bccb111-fc78-420c-bb88-788974b0d7d5-kube-api-access-ljf6z\") pod \"community-operators-9gxxh\" (UID: \"9bccb111-fc78-420c-bb88-788974b0d7d5\") " pod="openshift-marketplace/community-operators-9gxxh" Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.950806 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bccb111-fc78-420c-bb88-788974b0d7d5-utilities\") pod \"community-operators-9gxxh\" (UID: \"9bccb111-fc78-420c-bb88-788974b0d7d5\") " pod="openshift-marketplace/community-operators-9gxxh" Jan 21 09:56:39 crc kubenswrapper[5119]: E0121 09:56:39.976140 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:40.476122911 +0000 UTC m=+116.144214589 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.977299 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bccb111-fc78-420c-bb88-788974b0d7d5-utilities\") pod \"community-operators-9gxxh\" (UID: \"9bccb111-fc78-420c-bb88-788974b0d7d5\") " pod="openshift-marketplace/community-operators-9gxxh" Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.977520 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9bccb111-fc78-420c-bb88-788974b0d7d5-catalog-content\") pod \"community-operators-9gxxh\" (UID: \"9bccb111-fc78-420c-bb88-788974b0d7d5\") " pod="openshift-marketplace/community-operators-9gxxh" Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.979067 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-6jw94" event={"ID":"a1c5db5b-e8c1-4d79-aca9-10703c8e82db","Type":"ContainerStarted","Data":"32e688b58049850ff075881e023784dcf143e5b4d2712ebd714a1bbfcc7a8248"} Jan 21 09:56:39 crc kubenswrapper[5119]: I0121 09:56:39.979120 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-6jw94" event={"ID":"a1c5db5b-e8c1-4d79-aca9-10703c8e82db","Type":"ContainerStarted","Data":"0ff42f4a2f1c746ee5b28ffab462e2bc10c475a33f74b0ecf659f97975a209a3"} Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.008137 5119 ???:1] "http: TLS handshake error from 192.168.126.11:38196: no serving certificate available for the kubelet" Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.033283 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-6kqm2" podStartSLOduration=96.033267947 podStartE2EDuration="1m36.033267947s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:40.031658764 +0000 UTC m=+115.699750442" watchObservedRunningTime="2026-01-21 09:56:40.033267947 +0000 UTC m=+115.701359615" Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.034654 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-fk2f6" podStartSLOduration=96.034647204 podStartE2EDuration="1m36.034647204s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:39.978982337 +0000 UTC m=+115.647074015" watchObservedRunningTime="2026-01-21 09:56:40.034647204 +0000 UTC m=+115.702738882" Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.046272 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljf6z\" (UniqueName: \"kubernetes.io/projected/9bccb111-fc78-420c-bb88-788974b0d7d5-kube-api-access-ljf6z\") pod \"community-operators-9gxxh\" (UID: \"9bccb111-fc78-420c-bb88-788974b0d7d5\") " pod="openshift-marketplace/community-operators-9gxxh" Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.052450 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.052630 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f4078de-237b-4252-be46-f0b89d21c8ed-catalog-content\") pod \"certified-operators-wn7cq\" (UID: \"3f4078de-237b-4252-be46-f0b89d21c8ed\") " pod="openshift-marketplace/certified-operators-wn7cq" Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.052708 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f4078de-237b-4252-be46-f0b89d21c8ed-utilities\") pod \"certified-operators-wn7cq\" (UID: \"3f4078de-237b-4252-be46-f0b89d21c8ed\") " pod="openshift-marketplace/certified-operators-wn7cq" Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.052725 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-xf9p7\" (UniqueName: \"kubernetes.io/projected/3f4078de-237b-4252-be46-f0b89d21c8ed-kube-api-access-xf9p7\") pod \"certified-operators-wn7cq\" (UID: \"3f4078de-237b-4252-be46-f0b89d21c8ed\") " pod="openshift-marketplace/certified-operators-wn7cq" Jan 21 09:56:40 crc kubenswrapper[5119]: E0121 09:56:40.053530 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:40.553514258 +0000 UTC m=+116.221605926 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.072873 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9gxxh" Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.087022 5119 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cn8mz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 09:56:40 crc kubenswrapper[5119]: [-]has-synced failed: reason withheld Jan 21 09:56:40 crc kubenswrapper[5119]: [+]process-running ok Jan 21 09:56:40 crc kubenswrapper[5119]: healthz check failed Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.087084 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cn8mz" podUID="333793e1-92de-4fbe-83b5-26b64848c6af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.087419 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rjq8j" podStartSLOduration=96.086977452 podStartE2EDuration="1m36.086977452s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:40.061719407 +0000 UTC m=+115.729811085" watchObservedRunningTime="2026-01-21 09:56:40.086977452 +0000 UTC m=+115.755069130" Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.096297 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cgvgn"] Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.113192 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cgvgn"] Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.113361 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cgvgn" Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.117085 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-7tls5" event={"ID":"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b","Type":"ContainerStarted","Data":"2751cc9101704dcf4951f6960405fb0d28010502eb0b58aaad831044104aab69"} Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.162567 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f4078de-237b-4252-be46-f0b89d21c8ed-utilities\") pod \"certified-operators-wn7cq\" (UID: \"3f4078de-237b-4252-be46-f0b89d21c8ed\") " pod="openshift-marketplace/certified-operators-wn7cq" Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.162993 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xf9p7\" (UniqueName: \"kubernetes.io/projected/3f4078de-237b-4252-be46-f0b89d21c8ed-kube-api-access-xf9p7\") pod \"certified-operators-wn7cq\" (UID: \"3f4078de-237b-4252-be46-f0b89d21c8ed\") " pod="openshift-marketplace/certified-operators-wn7cq" Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.163094 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f4078de-237b-4252-be46-f0b89d21c8ed-catalog-content\") pod \"certified-operators-wn7cq\" (UID: \"3f4078de-237b-4252-be46-f0b89d21c8ed\") " pod="openshift-marketplace/certified-operators-wn7cq" Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.163170 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " 
pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.163245 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v7d7l" event={"ID":"a8053129-cc10-477e-b44f-52c846d9d1ce","Type":"ContainerStarted","Data":"c03370877b795eb6f26009602d33f55bac949b7a193320de9933f4f25e8ec710"}
Jan 21 09:56:40 crc kubenswrapper[5119]: E0121 09:56:40.163597 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:40.66358483 +0000 UTC m=+116.331676508 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.166065 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f4078de-237b-4252-be46-f0b89d21c8ed-catalog-content\") pod \"certified-operators-wn7cq\" (UID: \"3f4078de-237b-4252-be46-f0b89d21c8ed\") " pod="openshift-marketplace/certified-operators-wn7cq"
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.167795 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f4078de-237b-4252-be46-f0b89d21c8ed-utilities\") pod \"certified-operators-wn7cq\" (UID: \"3f4078de-237b-4252-be46-f0b89d21c8ed\") " pod="openshift-marketplace/certified-operators-wn7cq"
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.205398 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf9p7\" (UniqueName: \"kubernetes.io/projected/3f4078de-237b-4252-be46-f0b89d21c8ed-kube-api-access-xf9p7\") pod \"certified-operators-wn7cq\" (UID: \"3f4078de-237b-4252-be46-f0b89d21c8ed\") " pod="openshift-marketplace/certified-operators-wn7cq"
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.220414 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lpk4t" event={"ID":"9d29fd5f-ecd0-4624-97b8-5f2d50b70df0","Type":"ContainerStarted","Data":"404b3f1a06e70d9a26a8e6c5a12ce90713a4017e92203e0e56236ebf8eb25212"}
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.225581 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bkq66" podStartSLOduration=96.225564596 podStartE2EDuration="1m36.225564596s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:40.22422635 +0000 UTC m=+115.892318018" watchObservedRunningTime="2026-01-21 09:56:40.225564596 +0000 UTC m=+115.893656284"
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.226232 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hdhzp" event={"ID":"09592b3a-cb47-43ee-97e7-f058888af3ff","Type":"ContainerStarted","Data":"1b5843bc5ac90919dfbd85ae31e9b0d85d4823a54fe2d349ff27ece7751e05f6"}
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.248565 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-6jw94" podStartSLOduration=96.24854645 podStartE2EDuration="1m36.24854645s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:40.24743467 +0000 UTC m=+115.915526348" watchObservedRunningTime="2026-01-21 09:56:40.24854645 +0000 UTC m=+115.916638128"
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.265193 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.265329 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeb3fd25-b829-4977-aac0-aa2539bf13d0-catalog-content\") pod \"community-operators-cgvgn\" (UID: \"eeb3fd25-b829-4977-aac0-aa2539bf13d0\") " pod="openshift-marketplace/community-operators-cgvgn"
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.265382 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeb3fd25-b829-4977-aac0-aa2539bf13d0-utilities\") pod \"community-operators-cgvgn\" (UID: \"eeb3fd25-b829-4977-aac0-aa2539bf13d0\") " pod="openshift-marketplace/community-operators-cgvgn"
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.265400 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl4kz\" (UniqueName: \"kubernetes.io/projected/eeb3fd25-b829-4977-aac0-aa2539bf13d0-kube-api-access-pl4kz\") pod \"community-operators-cgvgn\" (UID: \"eeb3fd25-b829-4977-aac0-aa2539bf13d0\") " pod="openshift-marketplace/community-operators-cgvgn"
Jan 21 09:56:40 crc kubenswrapper[5119]: E0121 09:56:40.265548 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:40.765533434 +0000 UTC m=+116.433625112 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.297514 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wn7cq"
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.366554 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86" event={"ID":"986b7816-8325-48f6-b5a5-2d51c9f31687","Type":"ContainerStarted","Data":"1da8ff8f6386e3f5efeb05c9881716cb33938d8fc557ec679c063c0f62052629"}
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.367955 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.367986 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeb3fd25-b829-4977-aac0-aa2539bf13d0-catalog-content\") pod \"community-operators-cgvgn\" (UID: \"eeb3fd25-b829-4977-aac0-aa2539bf13d0\") " pod="openshift-marketplace/community-operators-cgvgn"
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.368037 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeb3fd25-b829-4977-aac0-aa2539bf13d0-utilities\") pod \"community-operators-cgvgn\" (UID: \"eeb3fd25-b829-4977-aac0-aa2539bf13d0\") " pod="openshift-marketplace/community-operators-cgvgn"
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.368055 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pl4kz\" (UniqueName: \"kubernetes.io/projected/eeb3fd25-b829-4977-aac0-aa2539bf13d0-kube-api-access-pl4kz\") pod \"community-operators-cgvgn\" (UID: \"eeb3fd25-b829-4977-aac0-aa2539bf13d0\") " pod="openshift-marketplace/community-operators-cgvgn"
Jan 21 09:56:40 crc kubenswrapper[5119]: E0121 09:56:40.368523 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:40.868511406 +0000 UTC m=+116.536603084 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.369027 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeb3fd25-b829-4977-aac0-aa2539bf13d0-catalog-content\") pod \"community-operators-cgvgn\" (UID: \"eeb3fd25-b829-4977-aac0-aa2539bf13d0\") " pod="openshift-marketplace/community-operators-cgvgn"
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.369142 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeb3fd25-b829-4977-aac0-aa2539bf13d0-utilities\") pod \"community-operators-cgvgn\" (UID: \"eeb3fd25-b829-4977-aac0-aa2539bf13d0\") " pod="openshift-marketplace/community-operators-cgvgn"
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.426220 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl4kz\" (UniqueName: \"kubernetes.io/projected/eeb3fd25-b829-4977-aac0-aa2539bf13d0-kube-api-access-pl4kz\") pod \"community-operators-cgvgn\" (UID: \"eeb3fd25-b829-4977-aac0-aa2539bf13d0\") " pod="openshift-marketplace/community-operators-cgvgn"
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.448033 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cgvgn"
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.469093 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:40 crc kubenswrapper[5119]: E0121 09:56:40.469380 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:40.969364341 +0000 UTC m=+116.637456019 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.509844 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nwbtj" event={"ID":"b88913b0-a37a-46b4-9c43-4e2e22f306d5","Type":"ContainerStarted","Data":"4bc83cf1e942a0f357a0d0cc7d4cf50790954f423ed5a3f11c989ac93d1241a7"}
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.509891 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nwbtj" event={"ID":"b88913b0-a37a-46b4-9c43-4e2e22f306d5","Type":"ContainerStarted","Data":"d3edb89c9d8e68ab80fe48287d3b314cfe2dc2db5591e3402a2780a7fac3f2cf"}
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.565388 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nwbtj" podStartSLOduration=96.565372326 podStartE2EDuration="1m36.565372326s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:40.563487426 +0000 UTC m=+116.231579104" watchObservedRunningTime="2026-01-21 09:56:40.565372326 +0000 UTC m=+116.233464004"
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.575293 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:40 crc kubenswrapper[5119]: E0121 09:56:40.575574 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:41.075561459 +0000 UTC m=+116.743653137 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.659912 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-8nd58" podStartSLOduration=96.659891832 podStartE2EDuration="1m36.659891832s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:40.658949426 +0000 UTC m=+116.327041104" watchObservedRunningTime="2026-01-21 09:56:40.659891832 +0000 UTC m=+116.327983510"
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.671105 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-8nd58" event={"ID":"fab52538-cd8b-408e-b571-f2dc516dc2a3","Type":"ContainerStarted","Data":"00ee80bce109dfba472d159e1fbe46d57a76557ad178b236715db9eceef1ab93"}
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.676641 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:40 crc kubenswrapper[5119]: E0121 09:56:40.677581 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:41.177565624 +0000 UTC m=+116.845657302 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.725004 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-gjxgc" event={"ID":"a1df92f8-e439-4da3-af25-cdbf8374d2da","Type":"ContainerStarted","Data":"1af72ba11e2ffe762db6b483e98b1a930e3cc9592f3521543e8bb5b5a9e85759"}
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.742018 5119 ???:1] "http: TLS handshake error from 192.168.126.11:38204: no serving certificate available for the kubelet"
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.778577 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:40 crc kubenswrapper[5119]: E0121 09:56:40.779559 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:41.279546919 +0000 UTC m=+116.947638597 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.872862 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fs8fn" event={"ID":"6169753a-a446-4d39-85c2-01422f667bde","Type":"ContainerStarted","Data":"97889de93df3b8bbe7c4c93693547a3a7a2dc6450d45a762c4c1912437ccfca0"}
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.873281 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fs8fn" event={"ID":"6169753a-a446-4d39-85c2-01422f667bde","Type":"ContainerStarted","Data":"1ee8e45a0ea7fa72647fc4aa344c1d0e61f252909f1f3cedae1d100e6c4a400e"}
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.880368 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:40 crc kubenswrapper[5119]: E0121 09:56:40.880772 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:41.380726804 +0000 UTC m=+117.048818482 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.882423 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fs8fn"
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.909837 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-76tl2" event={"ID":"e8369fc8-80db-4a4f-9928-46a8acdb2128","Type":"ContainerStarted","Data":"c08acd1555de490ce9931dc0e06bc29166665febf02ca1507cf9381899334193"}
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.940666 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fs8fn" podStartSLOduration=96.940619614 podStartE2EDuration="1m36.940619614s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:40.940367617 +0000 UTC m=+116.608459295" watchObservedRunningTime="2026-01-21 09:56:40.940619614 +0000 UTC m=+116.608711292"
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.942644 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-gjxgc" podStartSLOduration=7.942597597 podStartE2EDuration="7.942597597s" podCreationTimestamp="2026-01-21 09:56:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:40.774922216 +0000 UTC m=+116.443013894" watchObservedRunningTime="2026-01-21 09:56:40.942597597 +0000 UTC m=+116.610689275"
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.975375 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-76tl2" podStartSLOduration=7.975355781 podStartE2EDuration="7.975355781s" podCreationTimestamp="2026-01-21 09:56:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:40.971043477 +0000 UTC m=+116.639135155" watchObservedRunningTime="2026-01-21 09:56:40.975355781 +0000 UTC m=+116.643447459"
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.981847 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:40 crc kubenswrapper[5119]: E0121 09:56:40.983896 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:41.483884279 +0000 UTC m=+117.151975957 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:40 crc kubenswrapper[5119]: I0121 09:56:40.992515 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-z67hs" event={"ID":"f5a878c2-9a7b-4d34-a9ee-28bdd05d3151","Type":"ContainerStarted","Data":"ede4d4bcb9213c78d956746bb7c74ce07144a4b58a24777380e7bb1478e64fe0"}
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.022834 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-jvrpf" event={"ID":"f4f6fb51-60b9-4dcd-b79a-ebe933c83555","Type":"ContainerStarted","Data":"81ae7c21e640a78a62418a9cf0d9450a6f4818949e36e8480c77a1f465d555ac"}
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.023060 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-jvrpf" event={"ID":"f4f6fb51-60b9-4dcd-b79a-ebe933c83555","Type":"ContainerStarted","Data":"e3a53aebb2a920326bd795cdf01e3b30ff60edaeea250503227cfc01bf1ad207"}
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.023308 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-jvrpf"
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.077197 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dc2k6" event={"ID":"cec067a0-6e27-4e3f-b03a-f37ffd10dd43","Type":"ContainerStarted","Data":"fb32ac52916e164f4bbf2b729c9ca5bb384fb486e66712ca159a9e03bd07c0cd"}
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.082828 5119 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cn8mz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 09:56:41 crc kubenswrapper[5119]: [-]has-synced failed: reason withheld
Jan 21 09:56:41 crc kubenswrapper[5119]: [+]process-running ok
Jan 21 09:56:41 crc kubenswrapper[5119]: healthz check failed
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.082892 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cn8mz" podUID="333793e1-92de-4fbe-83b5-26b64848c6af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.083636 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:41 crc kubenswrapper[5119]: E0121 09:56:41.084497 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:41.584469937 +0000 UTC m=+117.252561615 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.099905 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-jvrpf" podStartSLOduration=97.09988604 podStartE2EDuration="1m37.09988604s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:41.059734466 +0000 UTC m=+116.727826144" watchObservedRunningTime="2026-01-21 09:56:41.09988604 +0000 UTC m=+116.767977718"
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.118556 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-bc4nv" event={"ID":"b4c88372-61dd-4fb9-8bcf-7c51ec904dd8","Type":"ContainerStarted","Data":"fdf5d15a2079b5dd2ec04c0e5e0a4851ead420df67e7fb15c10de454632501b1"}
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.119182 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-bc4nv"
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.185510 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.186044 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-vkvvq" event={"ID":"4c82a029-666a-49b5-8c4c-e8956a23303a","Type":"ContainerStarted","Data":"e49cccdaca5c615a067e2af15140f9eb5f9692924429b9723cb0d7e919a00b86"}
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.186116 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-vkvvq" event={"ID":"4c82a029-666a-49b5-8c4c-e8956a23303a","Type":"ContainerStarted","Data":"7d1f2ca1152a790660b9f0b00e3a4bb0110d441777e18480631a267b59a17a96"}
Jan 21 09:56:41 crc kubenswrapper[5119]: E0121 09:56:41.186781 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:41.686768241 +0000 UTC m=+117.354859919 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.203907 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-bc4nv" podStartSLOduration=8.203881258 podStartE2EDuration="8.203881258s" podCreationTimestamp="2026-01-21 09:56:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:41.197047535 +0000 UTC m=+116.865139233" watchObservedRunningTime="2026-01-21 09:56:41.203881258 +0000 UTC m=+116.871972926"
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.204463 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-dc2k6" podStartSLOduration=97.204459284 podStartE2EDuration="1m37.204459284s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:41.115858436 +0000 UTC m=+116.783950114" watchObservedRunningTime="2026-01-21 09:56:41.204459284 +0000 UTC m=+116.872550962"
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.250459 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-bc4nv"
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.271395 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-hnjk6" event={"ID":"0db2f4be-492e-40be-84b1-3578f55c1efb","Type":"ContainerStarted","Data":"3c1b6c6de5c45acae6daf6ea2483555fa6424f71c0356d17145f867ed6e26cbb"}
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.271439 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-hnjk6" event={"ID":"0db2f4be-492e-40be-84b1-3578f55c1efb","Type":"ContainerStarted","Data":"19462b24cec74990f6edb1cf82be1bea9c73d57a5df662ec6d2213e6d9eb915c"}
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.272467 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-hnjk6"
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.282671 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-vkvvq" podStartSLOduration=97.282643903 podStartE2EDuration="1m37.282643903s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:41.233495699 +0000 UTC m=+116.901587377" watchObservedRunningTime="2026-01-21 09:56:41.282643903 +0000 UTC m=+116.950735601"
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.289411 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:41 crc kubenswrapper[5119]: E0121 09:56:41.290341 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:41.790326339 +0000 UTC m=+117.458418017 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.309175 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-hnjk6"
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.332341 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-x6xfs" event={"ID":"19471fb3-19b2-42d4-967e-6b0620f686ce","Type":"ContainerStarted","Data":"3f54ef58684c8620ee015bcf351b5720e27adf81a0b9ed4eab40c85acbcccfd7"}
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.332380 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-x6xfs" event={"ID":"19471fb3-19b2-42d4-967e-6b0620f686ce","Type":"ContainerStarted","Data":"8cc7f28d346b105f755431bb012b6b0e696e2ee54408abf232341166c01497a4"}
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.333816 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-hnjk6" podStartSLOduration=97.33379745 podStartE2EDuration="1m37.33379745s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:41.333149333 +0000 UTC m=+117.001241011" watchObservedRunningTime="2026-01-21 09:56:41.33379745 +0000 UTC m=+117.001889128"
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.361919 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-g4flp" event={"ID":"0e7b694c-1e5a-4209-9baf-67cd7bc2f3af","Type":"ContainerStarted","Data":"75be577d6595777fd4a83a1769eee3c5bfab3a15d49fa960ac3fa135b85b4b47"}
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.379313 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8224m" event={"ID":"85f163dc-d2c8-4d62-9fa2-48d75035cbfa","Type":"ContainerStarted","Data":"bdf44786d9e6ff716ae6268b563d83f76ec23d98570caa6c5ee3dd702e24e9ae"}
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.387267 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-prvwt" event={"ID":"d91bd19e-1fee-475c-a8ff-ee1014086695","Type":"ContainerStarted","Data":"0cc772073bf7a0bd4d78b9f75605b2cf930b1ac2dab607a801196532e0c6ef36"}
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.387303 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-prvwt" event={"ID":"d91bd19e-1fee-475c-a8ff-ee1014086695","Type":"ContainerStarted","Data":"b15c600db1b08bf7e8acf9da04176cb5694c6e16013f09c38c1e917ad60a3ac3"}
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.388007 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-prvwt"
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.392721 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:41 crc kubenswrapper[5119]: E0121 09:56:41.394282 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:41.894265666 +0000 UTC m=+117.562357344 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.411853 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-x6xfs" podStartSLOduration=97.411839355 podStartE2EDuration="1m37.411839355s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:41.374552989 +0000 UTC m=+117.042644667" watchObservedRunningTime="2026-01-21 09:56:41.411839355 +0000 UTC m=+117.079931033"
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.415934 5119 patch_prober.go:28] interesting pod/downloads-747b44746d-prvwt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body=
Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.415993 5119 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-prvwt"
podUID="d91bd19e-1fee-475c-a8ff-ee1014086695" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.475289 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-7nw9s" event={"ID":"315dcf5a-c0ec-4778-9118-2f68422fcc17","Type":"ContainerStarted","Data":"a573ee329b0ac387f7051fcd2425fd5d1b8444e9ec2dfadae79e858e82358bdd"} Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.480359 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-j7g7z" event={"ID":"637fb734-7cb1-46f5-a282-438e701620d5","Type":"ContainerStarted","Data":"afebd01084af453069eea4c6a9f3d16fbe3eebed440e9d2135d4d711860ec9e5"} Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.501291 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:41 crc kubenswrapper[5119]: E0121 09:56:41.502282 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:42.002264131 +0000 UTC m=+117.670355809 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.506481 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-7nw9s" podStartSLOduration=97.506468294 podStartE2EDuration="1m37.506468294s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:41.506107555 +0000 UTC m=+117.174199233" watchObservedRunningTime="2026-01-21 09:56:41.506468294 +0000 UTC m=+117.174559972" Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.507510 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-prvwt" podStartSLOduration=97.507505601 podStartE2EDuration="1m37.507505601s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:41.456404716 +0000 UTC m=+117.124496384" watchObservedRunningTime="2026-01-21 09:56:41.507505601 +0000 UTC m=+117.175597279" Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.593318 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-j7g7z" podStartSLOduration=97.593299534 podStartE2EDuration="1m37.593299534s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:41.534353459 +0000 UTC m=+117.202445147" watchObservedRunningTime="2026-01-21 09:56:41.593299534 +0000 UTC m=+117.261391212" Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.595585 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-bc4nv"] Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.602669 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:41 crc kubenswrapper[5119]: E0121 09:56:41.609274 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:42.109258371 +0000 UTC m=+117.777350049 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.683282 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j4wln"] Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.686859 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4wln" Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.689906 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.693010 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4wln"] Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.703468 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w7cjs"] Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.703811 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:41 crc kubenswrapper[5119]: E0121 09:56:41.704362 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:42.204321991 +0000 UTC m=+117.872413679 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.806184 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.806237 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07776dee-a157-4c69-ae94-c63a101a84f2-utilities\") pod \"redhat-marketplace-j4wln\" (UID: \"07776dee-a157-4c69-ae94-c63a101a84f2\") " pod="openshift-marketplace/redhat-marketplace-j4wln" Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.806261 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07776dee-a157-4c69-ae94-c63a101a84f2-catalog-content\") pod \"redhat-marketplace-j4wln\" (UID: \"07776dee-a157-4c69-ae94-c63a101a84f2\") " pod="openshift-marketplace/redhat-marketplace-j4wln" Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.806325 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l248k\" (UniqueName: 
\"kubernetes.io/projected/07776dee-a157-4c69-ae94-c63a101a84f2-kube-api-access-l248k\") pod \"redhat-marketplace-j4wln\" (UID: \"07776dee-a157-4c69-ae94-c63a101a84f2\") " pod="openshift-marketplace/redhat-marketplace-j4wln" Jan 21 09:56:41 crc kubenswrapper[5119]: E0121 09:56:41.806637 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:42.306620785 +0000 UTC m=+117.974712463 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.830367 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cgvgn"] Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.870730 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9gxxh"] Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.882748 5119 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-fs8fn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.882830 5119 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fs8fn" 
podUID="6169753a-a446-4d39-85c2-01422f667bde" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.907341 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.907466 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l248k\" (UniqueName: \"kubernetes.io/projected/07776dee-a157-4c69-ae94-c63a101a84f2-kube-api-access-l248k\") pod \"redhat-marketplace-j4wln\" (UID: \"07776dee-a157-4c69-ae94-c63a101a84f2\") " pod="openshift-marketplace/redhat-marketplace-j4wln" Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.907547 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07776dee-a157-4c69-ae94-c63a101a84f2-utilities\") pod \"redhat-marketplace-j4wln\" (UID: \"07776dee-a157-4c69-ae94-c63a101a84f2\") " pod="openshift-marketplace/redhat-marketplace-j4wln" Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.907568 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07776dee-a157-4c69-ae94-c63a101a84f2-catalog-content\") pod \"redhat-marketplace-j4wln\" (UID: \"07776dee-a157-4c69-ae94-c63a101a84f2\") " pod="openshift-marketplace/redhat-marketplace-j4wln" Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.907921 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/07776dee-a157-4c69-ae94-c63a101a84f2-catalog-content\") pod \"redhat-marketplace-j4wln\" (UID: \"07776dee-a157-4c69-ae94-c63a101a84f2\") " pod="openshift-marketplace/redhat-marketplace-j4wln" Jan 21 09:56:41 crc kubenswrapper[5119]: E0121 09:56:41.908132 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:42.408113237 +0000 UTC m=+118.076204915 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.908140 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07776dee-a157-4c69-ae94-c63a101a84f2-utilities\") pod \"redhat-marketplace-j4wln\" (UID: \"07776dee-a157-4c69-ae94-c63a101a84f2\") " pod="openshift-marketplace/redhat-marketplace-j4wln" Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.931999 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wn7cq"] Jan 21 09:56:41 crc kubenswrapper[5119]: I0121 09:56:41.947490 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l248k\" (UniqueName: \"kubernetes.io/projected/07776dee-a157-4c69-ae94-c63a101a84f2-kube-api-access-l248k\") pod \"redhat-marketplace-j4wln\" (UID: \"07776dee-a157-4c69-ae94-c63a101a84f2\") " 
pod="openshift-marketplace/redhat-marketplace-j4wln" Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.008758 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:42 crc kubenswrapper[5119]: E0121 09:56:42.009099 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:42.509085505 +0000 UTC m=+118.177177183 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.025679 5119 patch_prober.go:28] interesting pod/console-operator-67c89758df-jvrpf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.025907 5119 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-jvrpf" podUID="f4f6fb51-60b9-4dcd-b79a-ebe933c83555" containerName="console-operator" 
probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.064835 5119 ???:1] "http: TLS handshake error from 192.168.126.11:40638: no serving certificate available for the kubelet" Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.066964 5119 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cn8mz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 09:56:42 crc kubenswrapper[5119]: [-]has-synced failed: reason withheld Jan 21 09:56:42 crc kubenswrapper[5119]: [+]process-running ok Jan 21 09:56:42 crc kubenswrapper[5119]: healthz check failed Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.067054 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cn8mz" podUID="333793e1-92de-4fbe-83b5-26b64848c6af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 09:56:42 crc kubenswrapper[5119]: W0121 09:56:42.070511 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f4078de_237b_4252_be46_f0b89d21c8ed.slice/crio-2dbe03a795a16ceeac08bbd07014ea303104947c6fe8d94027a07ca2f3317166 WatchSource:0}: Error finding container 2dbe03a795a16ceeac08bbd07014ea303104947c6fe8d94027a07ca2f3317166: Status 404 returned error can't find the container with id 2dbe03a795a16ceeac08bbd07014ea303104947c6fe8d94027a07ca2f3317166 Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.075218 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zm2f7"] Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.088880 5119 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4wln" Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.104082 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zm2f7" Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.105968 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zm2f7"] Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.110343 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:42 crc kubenswrapper[5119]: E0121 09:56:42.110937 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:42.610919806 +0000 UTC m=+118.279011484 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.212373 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdn8m\" (UniqueName: \"kubernetes.io/projected/a90aed0b-2281-4055-843d-678b06d52325-kube-api-access-cdn8m\") pod \"redhat-marketplace-zm2f7\" (UID: \"a90aed0b-2281-4055-843d-678b06d52325\") " pod="openshift-marketplace/redhat-marketplace-zm2f7" Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.212633 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a90aed0b-2281-4055-843d-678b06d52325-utilities\") pod \"redhat-marketplace-zm2f7\" (UID: \"a90aed0b-2281-4055-843d-678b06d52325\") " pod="openshift-marketplace/redhat-marketplace-zm2f7" Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.212680 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a90aed0b-2281-4055-843d-678b06d52325-catalog-content\") pod \"redhat-marketplace-zm2f7\" (UID: \"a90aed0b-2281-4055-843d-678b06d52325\") " pod="openshift-marketplace/redhat-marketplace-zm2f7" Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.212725 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:42 crc kubenswrapper[5119]: E0121 09:56:42.213014 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:42.712999694 +0000 UTC m=+118.381091372 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.314589 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.314867 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cdn8m\" (UniqueName: \"kubernetes.io/projected/a90aed0b-2281-4055-843d-678b06d52325-kube-api-access-cdn8m\") pod \"redhat-marketplace-zm2f7\" (UID: \"a90aed0b-2281-4055-843d-678b06d52325\") " pod="openshift-marketplace/redhat-marketplace-zm2f7" Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.314893 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a90aed0b-2281-4055-843d-678b06d52325-utilities\") pod \"redhat-marketplace-zm2f7\" (UID: \"a90aed0b-2281-4055-843d-678b06d52325\") " pod="openshift-marketplace/redhat-marketplace-zm2f7" Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.314933 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a90aed0b-2281-4055-843d-678b06d52325-catalog-content\") pod \"redhat-marketplace-zm2f7\" (UID: \"a90aed0b-2281-4055-843d-678b06d52325\") " pod="openshift-marketplace/redhat-marketplace-zm2f7" Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.315321 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a90aed0b-2281-4055-843d-678b06d52325-catalog-content\") pod \"redhat-marketplace-zm2f7\" (UID: \"a90aed0b-2281-4055-843d-678b06d52325\") " pod="openshift-marketplace/redhat-marketplace-zm2f7" Jan 21 09:56:42 crc kubenswrapper[5119]: E0121 09:56:42.315383 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:42.81536771 +0000 UTC m=+118.483459388 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.315984 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a90aed0b-2281-4055-843d-678b06d52325-utilities\") pod \"redhat-marketplace-zm2f7\" (UID: \"a90aed0b-2281-4055-843d-678b06d52325\") " pod="openshift-marketplace/redhat-marketplace-zm2f7" Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.342415 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdn8m\" (UniqueName: \"kubernetes.io/projected/a90aed0b-2281-4055-843d-678b06d52325-kube-api-access-cdn8m\") pod \"redhat-marketplace-zm2f7\" (UID: \"a90aed0b-2281-4055-843d-678b06d52325\") " pod="openshift-marketplace/redhat-marketplace-zm2f7" Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.415775 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:42 crc kubenswrapper[5119]: E0121 09:56:42.416338 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 09:56:42.916325347 +0000 UTC m=+118.584417025 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.452846 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zm2f7" Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.514389 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-x6xfs" event={"ID":"19471fb3-19b2-42d4-967e-6b0620f686ce","Type":"ContainerStarted","Data":"5b9ba98c7fa47df8f1540b70c51b6998a1a0d68fec15c357f8f7a57cde9b559f"} Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.522242 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:42 crc kubenswrapper[5119]: E0121 09:56:42.522341 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:43.022322509 +0000 UTC m=+118.690414187 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.522576 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:42 crc kubenswrapper[5119]: E0121 09:56:42.522841 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:43.022827773 +0000 UTC m=+118.690919451 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.551289 5119 generic.go:358] "Generic (PLEG): container finished" podID="49467157-6fc6-4f0b-b833-1b95a6068d7e" containerID="adc8977be75dc7f29782caedded6c37a9bf902f0ee05953fd79482207ca167a5" exitCode=0 Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.551388 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w7cjs" event={"ID":"49467157-6fc6-4f0b-b833-1b95a6068d7e","Type":"ContainerDied","Data":"adc8977be75dc7f29782caedded6c37a9bf902f0ee05953fd79482207ca167a5"} Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.551415 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w7cjs" event={"ID":"49467157-6fc6-4f0b-b833-1b95a6068d7e","Type":"ContainerStarted","Data":"ead29c4becaf9684ee661545200b94f74f075e9c4115d493b670e31d942033a3"} Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.555227 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-g4flp" event={"ID":"0e7b694c-1e5a-4209-9baf-67cd7bc2f3af","Type":"ContainerStarted","Data":"d2bcf5156715529f9f382b5f83744f19b1a618fbdcbb9933060509b7880816aa"} Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.555258 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-g4flp" 
event={"ID":"0e7b694c-1e5a-4209-9baf-67cd7bc2f3af","Type":"ContainerStarted","Data":"ef4cdd319574aa7f4ee06c0c280303d978779bb0b6b759bc1ff99ab59864f732"} Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.610358 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8224m" event={"ID":"85f163dc-d2c8-4d62-9fa2-48d75035cbfa","Type":"ContainerStarted","Data":"f579ffea3acb8c98b627289df3143bfec5d95840c8ffa7e82e877921f15dc09b"} Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.611596 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8224m" event={"ID":"85f163dc-d2c8-4d62-9fa2-48d75035cbfa","Type":"ContainerStarted","Data":"8bb5eb5b2bd738ab023eaeae8decaa0ec4e9e22e311810e1915a7ba97f5a0ab5"} Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.611688 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-8224m" Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.623911 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.624089 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgvgn" event={"ID":"eeb3fd25-b829-4977-aac0-aa2539bf13d0","Type":"ContainerStarted","Data":"dd8c4b8ccfc14e8cd76f499072f775757b491d23cfb2c9179e4a7c5f9ea07658"} Jan 21 09:56:42 crc kubenswrapper[5119]: E0121 09:56:42.624279 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-21 09:56:43.124258264 +0000 UTC m=+118.792349942 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.655044 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-dpf2h" event={"ID":"9202c0b0-32fd-49a9-85ce-98c79744bfcf","Type":"ContainerStarted","Data":"cf01d3114243954908265ad8b003253ff105363639e10f0755c51641c9992267"} Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.656861 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-dpf2h" Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.671497 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-dpf2h" Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.673047 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-7nw9s" event={"ID":"315dcf5a-c0ec-4778-9118-2f68422fcc17","Type":"ContainerStarted","Data":"00bec758e261f671969ddef6294d57d2dbbf202640588907b0110617b99e25e3"} Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.688318 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-g4flp" podStartSLOduration=98.688298715 podStartE2EDuration="1m38.688298715s" podCreationTimestamp="2026-01-21 09:55:04 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:42.650773412 +0000 UTC m=+118.318865090" watchObservedRunningTime="2026-01-21 09:56:42.688298715 +0000 UTC m=+118.356390393" Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.690005 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-8224m" podStartSLOduration=9.6899979 podStartE2EDuration="9.6899979s" podCreationTimestamp="2026-01-21 09:56:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:42.688696035 +0000 UTC m=+118.356787713" watchObservedRunningTime="2026-01-21 09:56:42.6899979 +0000 UTC m=+118.358089578" Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.690472 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-j7g7z" event={"ID":"637fb734-7cb1-46f5-a282-438e701620d5","Type":"ContainerStarted","Data":"aeeebcec59d9223f0136cb8e7ef9801c5832e14833f47f0dd1d5d79e0a8113d7"} Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.690509 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-j7g7z" event={"ID":"637fb734-7cb1-46f5-a282-438e701620d5","Type":"ContainerStarted","Data":"8cb858a8fdf2a32aa0e5796119d3084f010f29ff6fc3543127197256fe8a3776"} Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.716097 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5b4c8" event={"ID":"197acdb1-438c-41ba-8b8d-a78197486cd7","Type":"ContainerStarted","Data":"3ff90a3fa5d4d9146adfe5a8318eb64c49aeec78c324ce351c465ca43135a014"} Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.716358 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" 
status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5b4c8" Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.725900 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:42 crc kubenswrapper[5119]: E0121 09:56:42.728724 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:43.228708474 +0000 UTC m=+118.896800152 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.730556 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-7855f" event={"ID":"303b471c-5851-4624-a1c6-5d8f826641b1","Type":"ContainerStarted","Data":"2ab71ba3f0f7c89427839e59c3f734effd48257766d376311a63511c9ced8457"} Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.734278 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4rx9w" 
event={"ID":"984cb670-8e15-4092-bf42-f3c6337e1cad","Type":"ContainerStarted","Data":"b5c56acf6878469fce270733f43e3dc92db6e69432c1d8d223a0049da60ae2b6"} Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.742661 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4wln"] Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.761983 5119 generic.go:358] "Generic (PLEG): container finished" podID="47512efb-ea0a-42ac-a2c6-fd3017df0ce1" containerID="e101219e4a7e2ac0c07d58264140691317544292ea8b106991f3cef62b7435bc" exitCode=0 Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.762057 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc" event={"ID":"47512efb-ea0a-42ac-a2c6-fd3017df0ce1","Type":"ContainerDied","Data":"e101219e4a7e2ac0c07d58264140691317544292ea8b106991f3cef62b7435bc"} Jan 21 09:56:42 crc kubenswrapper[5119]: W0121 09:56:42.771777 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07776dee_a157_4c69_ae94_c63a101a84f2.slice/crio-31f2c489a389f57e68b8247650280e6151340a6054fa0009f288e360a9937958 WatchSource:0}: Error finding container 31f2c489a389f57e68b8247650280e6151340a6054fa0009f288e360a9937958: Status 404 returned error can't find the container with id 31f2c489a389f57e68b8247650280e6151340a6054fa0009f288e360a9937958 Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.796179 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wn7cq" event={"ID":"3f4078de-237b-4252-be46-f0b89d21c8ed","Type":"ContainerStarted","Data":"2dbe03a795a16ceeac08bbd07014ea303104947c6fe8d94027a07ca2f3317166"} Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.799658 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-dpf2h" 
podStartSLOduration=98.799638449 podStartE2EDuration="1m38.799638449s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:42.760954907 +0000 UTC m=+118.429046585" watchObservedRunningTime="2026-01-21 09:56:42.799638449 +0000 UTC m=+118.467730127" Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.831682 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:42 crc kubenswrapper[5119]: E0121 09:56:42.834025 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:43.333999778 +0000 UTC m=+119.002091446 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.837142 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5b4c8" podStartSLOduration=98.837127162 podStartE2EDuration="1m38.837127162s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:42.798594912 +0000 UTC m=+118.466686590" watchObservedRunningTime="2026-01-21 09:56:42.837127162 +0000 UTC m=+118.505218840" Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.849963 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-9jrsz" event={"ID":"1b25e062-a07b-4350-84c9-9247d3a0c144","Type":"ContainerStarted","Data":"568136f531d74336ec611a9209b539346b8e5a5992c06b522997d5fa7cdf90c9"} Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.872025 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-7tls5" event={"ID":"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b","Type":"ContainerStarted","Data":"1bb476f1e1d9a54e015b035e1dd6ffbdbb34906a41917485ecf2921d9accccef"} Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.872974 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-7tls5" Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.883611 
5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9nmdb"] Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.895843 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9nmdb" Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.898892 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v7d7l" event={"ID":"a8053129-cc10-477e-b44f-52c846d9d1ce","Type":"ContainerStarted","Data":"3555190f98bb93adf5d3e9eced3342f4944bffa31df7baf49b5bd2edf779affb"} Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.900885 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.907769 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9nmdb"] Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.918936 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lpk4t" event={"ID":"9d29fd5f-ecd0-4624-97b8-5f2d50b70df0","Type":"ContainerStarted","Data":"e70e2e24369147876f71476da18eb94f9c526a3721d990dcdc9301c86c70bb0c"} Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.923674 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zm2f7"] Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.958176 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hdhzp" event={"ID":"09592b3a-cb47-43ee-97e7-f058888af3ff","Type":"ContainerStarted","Data":"9ffd5829d32130acc5c42365cdb3a88ff8ce6d74fa3bcd94e99532aa5feaf9cc"} Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.960533 5119 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:42 crc kubenswrapper[5119]: E0121 09:56:42.963467 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:43.463445847 +0000 UTC m=+119.131537525 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:42 crc kubenswrapper[5119]: I0121 09:56:42.997173 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-4rx9w" podStartSLOduration=98.997157458 podStartE2EDuration="1m38.997157458s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:42.959341677 +0000 UTC m=+118.627433355" watchObservedRunningTime="2026-01-21 09:56:42.997157458 +0000 UTC m=+118.665249136" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.020521 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-hdhzp" podStartSLOduration=99.020503972 podStartE2EDuration="1m39.020503972s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:42.996310966 +0000 UTC m=+118.664402634" watchObservedRunningTime="2026-01-21 09:56:43.020503972 +0000 UTC m=+118.688595650" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.021051 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-lpk4t" podStartSLOduration=99.021047417 podStartE2EDuration="1m39.021047417s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:43.019186557 +0000 UTC m=+118.687278235" watchObservedRunningTime="2026-01-21 09:56:43.021047417 +0000 UTC m=+118.689139095" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.027793 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86" event={"ID":"986b7816-8325-48f6-b5a5-2d51c9f31687","Type":"ContainerStarted","Data":"485f8dc57b25b8a2ba36a5cfa365ed96aad5692f3fbe3a3f0e90d31dbed0211d"} Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.070371 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.070477 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgxcw\" 
(UniqueName: \"kubernetes.io/projected/6783a1d3-549e-4077-9898-723d2984e451-kube-api-access-xgxcw\") pod \"redhat-operators-9nmdb\" (UID: \"6783a1d3-549e-4077-9898-723d2984e451\") " pod="openshift-marketplace/redhat-operators-9nmdb" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.070574 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6783a1d3-549e-4077-9898-723d2984e451-catalog-content\") pod \"redhat-operators-9nmdb\" (UID: \"6783a1d3-549e-4077-9898-723d2984e451\") " pod="openshift-marketplace/redhat-operators-9nmdb" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.070594 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6783a1d3-549e-4077-9898-723d2984e451-utilities\") pod \"redhat-operators-9nmdb\" (UID: \"6783a1d3-549e-4077-9898-723d2984e451\") " pod="openshift-marketplace/redhat-operators-9nmdb" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.070707 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-v7d7l" podStartSLOduration=99.070692443 podStartE2EDuration="1m39.070692443s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:43.046010073 +0000 UTC m=+118.714101751" watchObservedRunningTime="2026-01-21 09:56:43.070692443 +0000 UTC m=+118.738784121" Jan 21 09:56:43 crc kubenswrapper[5119]: E0121 09:56:43.076891 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-21 09:56:43.576867698 +0000 UTC m=+119.244959376 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.078163 5119 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cn8mz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 09:56:43 crc kubenswrapper[5119]: [-]has-synced failed: reason withheld Jan 21 09:56:43 crc kubenswrapper[5119]: [+]process-running ok Jan 21 09:56:43 crc kubenswrapper[5119]: healthz check failed Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.078216 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cn8mz" podUID="333793e1-92de-4fbe-83b5-26b64848c6af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.088933 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nwbtj" event={"ID":"b88913b0-a37a-46b4-9c43-4e2e22f306d5","Type":"ContainerStarted","Data":"b50b3d2374d44282d9d4e23ac17120f20961a547d2c371bafdd9a226a3208202"} Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.102233 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-7tls5" podStartSLOduration=99.102206265 
podStartE2EDuration="1m39.102206265s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:43.099884063 +0000 UTC m=+118.767975741" watchObservedRunningTime="2026-01-21 09:56:43.102206265 +0000 UTC m=+118.770297933" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.116427 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gxxh" event={"ID":"9bccb111-fc78-420c-bb88-788974b0d7d5","Type":"ContainerStarted","Data":"a2b760901412f181c1a3dc481ff1b0ff91507e3ff8ad6d53e6a932628b6bf771"} Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.116469 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gxxh" event={"ID":"9bccb111-fc78-420c-bb88-788974b0d7d5","Type":"ContainerStarted","Data":"a6a15d5a0fb4bc6d9aeba8ffba44cd8888b405ae75e1655106a56e71116c9e11"} Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.131227 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-gjxgc" event={"ID":"a1df92f8-e439-4da3-af25-cdbf8374d2da","Type":"ContainerStarted","Data":"83beb52094376e6d2d579268968e2bca40dba9827a908a3ababc8d8779cdabd5"} Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.155143 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86" podStartSLOduration=100.155123049 podStartE2EDuration="1m40.155123049s" podCreationTimestamp="2026-01-21 09:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:43.132934096 +0000 UTC m=+118.801025774" watchObservedRunningTime="2026-01-21 09:56:43.155123049 +0000 UTC m=+118.823214737" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.163974 5119 
generic.go:358] "Generic (PLEG): container finished" podID="f53b6ab7-e57d-4f85-adef-9a60515f8f1f" containerID="fe499b7174c2bdcf92728788c1ebaa1347a73922495315eb8a10eb6fd6049e8b" exitCode=0 Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.164073 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-q8nfm" event={"ID":"f53b6ab7-e57d-4f85-adef-9a60515f8f1f","Type":"ContainerDied","Data":"fe499b7174c2bdcf92728788c1ebaa1347a73922495315eb8a10eb6fd6049e8b"} Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.172034 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xgxcw\" (UniqueName: \"kubernetes.io/projected/6783a1d3-549e-4077-9898-723d2984e451-kube-api-access-xgxcw\") pod \"redhat-operators-9nmdb\" (UID: \"6783a1d3-549e-4077-9898-723d2984e451\") " pod="openshift-marketplace/redhat-operators-9nmdb" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.172072 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.172200 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6783a1d3-549e-4077-9898-723d2984e451-catalog-content\") pod \"redhat-operators-9nmdb\" (UID: \"6783a1d3-549e-4077-9898-723d2984e451\") " pod="openshift-marketplace/redhat-operators-9nmdb" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.172232 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6783a1d3-549e-4077-9898-723d2984e451-utilities\") pod \"redhat-operators-9nmdb\" (UID: \"6783a1d3-549e-4077-9898-723d2984e451\") " pod="openshift-marketplace/redhat-operators-9nmdb" Jan 21 09:56:43 crc kubenswrapper[5119]: E0121 09:56:43.172902 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:43.672886074 +0000 UTC m=+119.340977752 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.173043 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6783a1d3-549e-4077-9898-723d2984e451-catalog-content\") pod \"redhat-operators-9nmdb\" (UID: \"6783a1d3-549e-4077-9898-723d2984e451\") " pod="openshift-marketplace/redhat-operators-9nmdb" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.173250 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6783a1d3-549e-4077-9898-723d2984e451-utilities\") pod \"redhat-operators-9nmdb\" (UID: \"6783a1d3-549e-4077-9898-723d2984e451\") " pod="openshift-marketplace/redhat-operators-9nmdb" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.179883 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-7tls5" Jan 21 09:56:43 crc 
kubenswrapper[5119]: I0121 09:56:43.180227 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-76tl2" event={"ID":"e8369fc8-80db-4a4f-9928-46a8acdb2128","Type":"ContainerStarted","Data":"3dcee85b4f0279756f42ac759428a53496a633f4d4b778c8f4936a0df35e0692"} Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.201910 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-z67hs" event={"ID":"f5a878c2-9a7b-4d34-a9ee-28bdd05d3151","Type":"ContainerStarted","Data":"f2978d9abe5f90f85e007be87d1d5fea81e25ebc40c2677099504fb575e11a7b"} Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.203053 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-z67hs" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.204740 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgxcw\" (UniqueName: \"kubernetes.io/projected/6783a1d3-549e-4077-9898-723d2984e451-kube-api-access-xgxcw\") pod \"redhat-operators-9nmdb\" (UID: \"6783a1d3-549e-4077-9898-723d2984e451\") " pod="openshift-marketplace/redhat-operators-9nmdb" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.207852 5119 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-z67hs container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.207958 5119 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-z67hs" podUID="f5a878c2-9a7b-4d34-a9ee-28bdd05d3151" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Jan 21 
09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.210447 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-bc4nv" event={"ID":"b4c88372-61dd-4fb9-8bcf-7c51ec904dd8","Type":"ContainerStarted","Data":"522c9f419263dda7903f58c657727688825beb62fbd951a112617dc2ee68863f"} Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.210925 5119 patch_prober.go:28] interesting pod/downloads-747b44746d-prvwt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.211021 5119 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-prvwt" podUID="d91bd19e-1fee-475c-a8ff-ee1014086695" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.213259 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-fs8fn" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.219852 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-jvrpf" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.227794 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9nmdb" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.273280 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:43 crc kubenswrapper[5119]: E0121 09:56:43.273451 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:43.773434311 +0000 UTC m=+119.441525989 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.273689 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:43 crc kubenswrapper[5119]: E0121 09:56:43.274421 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: 
nodeName:}" failed. No retries permitted until 2026-01-21 09:56:43.774413417 +0000 UTC m=+119.442505095 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.284373 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6z5f6"] Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.285460 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-z67hs" podStartSLOduration=99.285451152 podStartE2EDuration="1m39.285451152s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:43.28236852 +0000 UTC m=+118.950460198" watchObservedRunningTime="2026-01-21 09:56:43.285451152 +0000 UTC m=+118.953542830" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.302317 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6z5f6" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.306839 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6z5f6"] Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.377968 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:43 crc kubenswrapper[5119]: E0121 09:56:43.378279 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:43.878252372 +0000 UTC m=+119.546344060 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.480778 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aa99c79-80c4-41a9-beaa-32c9643971e5-utilities\") pod \"redhat-operators-6z5f6\" (UID: \"4aa99c79-80c4-41a9-beaa-32c9643971e5\") " pod="openshift-marketplace/redhat-operators-6z5f6" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.481066 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aa99c79-80c4-41a9-beaa-32c9643971e5-catalog-content\") pod \"redhat-operators-6z5f6\" (UID: \"4aa99c79-80c4-41a9-beaa-32c9643971e5\") " pod="openshift-marketplace/redhat-operators-6z5f6" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.481087 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4hgg\" (UniqueName: \"kubernetes.io/projected/4aa99c79-80c4-41a9-beaa-32c9643971e5-kube-api-access-z4hgg\") pod \"redhat-operators-6z5f6\" (UID: \"4aa99c79-80c4-41a9-beaa-32c9643971e5\") " pod="openshift-marketplace/redhat-operators-6z5f6" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.481114 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:43 crc kubenswrapper[5119]: E0121 09:56:43.481383 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:43.981371437 +0000 UTC m=+119.649463115 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.582635 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.582891 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aa99c79-80c4-41a9-beaa-32c9643971e5-utilities\") pod \"redhat-operators-6z5f6\" (UID: \"4aa99c79-80c4-41a9-beaa-32c9643971e5\") " pod="openshift-marketplace/redhat-operators-6z5f6" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.582920 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aa99c79-80c4-41a9-beaa-32c9643971e5-catalog-content\") 
pod \"redhat-operators-6z5f6\" (UID: \"4aa99c79-80c4-41a9-beaa-32c9643971e5\") " pod="openshift-marketplace/redhat-operators-6z5f6" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.582943 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z4hgg\" (UniqueName: \"kubernetes.io/projected/4aa99c79-80c4-41a9-beaa-32c9643971e5-kube-api-access-z4hgg\") pod \"redhat-operators-6z5f6\" (UID: \"4aa99c79-80c4-41a9-beaa-32c9643971e5\") " pod="openshift-marketplace/redhat-operators-6z5f6" Jan 21 09:56:43 crc kubenswrapper[5119]: E0121 09:56:43.583314 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:44.083298281 +0000 UTC m=+119.751389959 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.583996 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aa99c79-80c4-41a9-beaa-32c9643971e5-utilities\") pod \"redhat-operators-6z5f6\" (UID: \"4aa99c79-80c4-41a9-beaa-32c9643971e5\") " pod="openshift-marketplace/redhat-operators-6z5f6" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.584072 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4aa99c79-80c4-41a9-beaa-32c9643971e5-catalog-content\") pod \"redhat-operators-6z5f6\" (UID: \"4aa99c79-80c4-41a9-beaa-32c9643971e5\") " pod="openshift-marketplace/redhat-operators-6z5f6" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.590691 5119 scope.go:117] "RemoveContainer" containerID="af966673193b18b94972ab97e230ca79320cda15af6c9ff9d82808e946b8c6a8" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.603533 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4hgg\" (UniqueName: \"kubernetes.io/projected/4aa99c79-80c4-41a9-beaa-32c9643971e5-kube-api-access-z4hgg\") pod \"redhat-operators-6z5f6\" (UID: \"4aa99c79-80c4-41a9-beaa-32c9643971e5\") " pod="openshift-marketplace/redhat-operators-6z5f6" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.616355 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9nmdb"] Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.627511 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z5f6" Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.684411 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:43 crc kubenswrapper[5119]: E0121 09:56:43.684783 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:44.184770342 +0000 UTC m=+119.852862020 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.786187 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:43 crc kubenswrapper[5119]: E0121 09:56:43.786572 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:44.286554043 +0000 UTC m=+119.954645721 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.890056 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:43 crc kubenswrapper[5119]: E0121 09:56:43.890653 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:44.390639154 +0000 UTC m=+120.058730832 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.961759 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6z5f6"] Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.990990 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:43 crc kubenswrapper[5119]: E0121 09:56:43.991202 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:44.491175791 +0000 UTC m=+120.159267469 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:43 crc kubenswrapper[5119]: I0121 09:56:43.991392 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:43 crc kubenswrapper[5119]: E0121 09:56:43.991814 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:44.491799407 +0000 UTC m=+120.159891085 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.063182 5119 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cn8mz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 09:56:44 crc kubenswrapper[5119]: [-]has-synced failed: reason withheld Jan 21 09:56:44 crc kubenswrapper[5119]: [+]process-running ok Jan 21 09:56:44 crc kubenswrapper[5119]: healthz check failed Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.063239 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cn8mz" podUID="333793e1-92de-4fbe-83b5-26b64848c6af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.092667 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:44 crc kubenswrapper[5119]: E0121 09:56:44.092837 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-21 09:56:44.592804196 +0000 UTC m=+120.260895874 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.093187 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:44 crc kubenswrapper[5119]: E0121 09:56:44.093660 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:44.593649879 +0000 UTC m=+120.261741557 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.194425 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:44 crc kubenswrapper[5119]: E0121 09:56:44.194779 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:44.694755881 +0000 UTC m=+120.362847559 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.219322 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nmdb" event={"ID":"6783a1d3-549e-4077-9898-723d2984e451","Type":"ContainerStarted","Data":"132d1b5afc13e23bff80fbcbeab9b77b73db19f2778c5360db8bc8070b1ff409"} Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.229367 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-7855f" event={"ID":"303b471c-5851-4624-a1c6-5d8f826641b1","Type":"ContainerStarted","Data":"1f47061162469800151e8a05f1c21afdae7a12dcd4d3a8a85ff5ae0293ce20da"} Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.231483 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc" event={"ID":"47512efb-ea0a-42ac-a2c6-fd3017df0ce1","Type":"ContainerStarted","Data":"39cf59a30586c076930cbd0ef00ae320994125c1d43660712182d43afc03b001"} Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.232789 5119 generic.go:358] "Generic (PLEG): container finished" podID="3f4078de-237b-4252-be46-f0b89d21c8ed" containerID="2dda0b36a0e1728d9f7742117b19a325f3e8250cf03c7ef86dcfcd1d6d17eb3f" exitCode=0 Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.232838 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wn7cq" 
event={"ID":"3f4078de-237b-4252-be46-f0b89d21c8ed","Type":"ContainerDied","Data":"2dda0b36a0e1728d9f7742117b19a325f3e8250cf03c7ef86dcfcd1d6d17eb3f"} Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.234082 5119 generic.go:358] "Generic (PLEG): container finished" podID="07776dee-a157-4c69-ae94-c63a101a84f2" containerID="ed62d1924339add48c5dbe2f41b95894222cc02affa7a2b4a7102901a8671a9a" exitCode=0 Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.234117 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4wln" event={"ID":"07776dee-a157-4c69-ae94-c63a101a84f2","Type":"ContainerDied","Data":"ed62d1924339add48c5dbe2f41b95894222cc02affa7a2b4a7102901a8671a9a"} Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.234151 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4wln" event={"ID":"07776dee-a157-4c69-ae94-c63a101a84f2","Type":"ContainerStarted","Data":"31f2c489a389f57e68b8247650280e6151340a6054fa0009f288e360a9937958"} Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.237063 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-9jrsz" event={"ID":"1b25e062-a07b-4350-84c9-9247d3a0c144","Type":"ContainerStarted","Data":"c553fe7d640296bebb837d673c456c94feb4c9e16daf17fb65ff9b6a1e2722f7"} Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.240184 5119 generic.go:358] "Generic (PLEG): container finished" podID="9bccb111-fc78-420c-bb88-788974b0d7d5" containerID="a2b760901412f181c1a3dc481ff1b0ff91507e3ff8ad6d53e6a932628b6bf771" exitCode=0 Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.240271 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gxxh" event={"ID":"9bccb111-fc78-420c-bb88-788974b0d7d5","Type":"ContainerDied","Data":"a2b760901412f181c1a3dc481ff1b0ff91507e3ff8ad6d53e6a932628b6bf771"} Jan 21 09:56:44 crc 
kubenswrapper[5119]: I0121 09:56:44.241370 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6z5f6" event={"ID":"4aa99c79-80c4-41a9-beaa-32c9643971e5","Type":"ContainerStarted","Data":"eb3e9603d31bbfd272c369dcd5bed4cb294630784892ee895101a4150a09890b"}
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.242458 5119 generic.go:358] "Generic (PLEG): container finished" podID="a90aed0b-2281-4055-843d-678b06d52325" containerID="b71975588fd89225ab1f014a5f733f05eeb18ff718934d72fcfe7806c9811738" exitCode=0
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.242504 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zm2f7" event={"ID":"a90aed0b-2281-4055-843d-678b06d52325","Type":"ContainerDied","Data":"b71975588fd89225ab1f014a5f733f05eeb18ff718934d72fcfe7806c9811738"}
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.242519 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zm2f7" event={"ID":"a90aed0b-2281-4055-843d-678b06d52325","Type":"ContainerStarted","Data":"d8c7c502c61fd9683e0fe4b7062fc7ab35cc5b8d304334e5c2286beaa7e91917"}
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.245133 5119 generic.go:358] "Generic (PLEG): container finished" podID="eeb3fd25-b829-4977-aac0-aa2539bf13d0" containerID="be3022aafe9ef17c86556fb4e19eff3f45b3a9f1b6808e9dc99e75a28cd36a03" exitCode=0
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.245673 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgvgn" event={"ID":"eeb3fd25-b829-4977-aac0-aa2539bf13d0","Type":"ContainerDied","Data":"be3022aafe9ef17c86556fb4e19eff3f45b3a9f1b6808e9dc99e75a28cd36a03"}
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.307059 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:44 crc kubenswrapper[5119]: E0121 09:56:44.307401 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:44.807389061 +0000 UTC m=+120.475480739 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.398416 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-bc4nv" podUID="b4c88372-61dd-4fb9-8bcf-7c51ec904dd8" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://522c9f419263dda7903f58c657727688825beb62fbd951a112617dc2ee68863f" gracePeriod=30
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.399367 5119 patch_prober.go:28] interesting pod/downloads-747b44746d-prvwt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body=
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.399426 5119 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-prvwt" podUID="d91bd19e-1fee-475c-a8ff-ee1014086695" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.41:8080/\": dial tcp 10.217.0.41:8080: connect: connection refused"
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.401067 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-z67hs"
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.408818 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:44 crc kubenswrapper[5119]: E0121 09:56:44.408953 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:44.908930614 +0000 UTC m=+120.577022292 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.409090 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:44 crc kubenswrapper[5119]: E0121 09:56:44.410160 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:44.910150166 +0000 UTC m=+120.578241844 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.426340 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-7855f" podStartSLOduration=100.426326988 podStartE2EDuration="1m40.426326988s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:44.423241156 +0000 UTC m=+120.091332834" watchObservedRunningTime="2026-01-21 09:56:44.426326988 +0000 UTC m=+120.094418666"
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.464833 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-9jrsz" podStartSLOduration=100.464812997 podStartE2EDuration="1m40.464812997s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:44.464476988 +0000 UTC m=+120.132568666" watchObservedRunningTime="2026-01-21 09:56:44.464812997 +0000 UTC m=+120.132904685"
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.493293 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc" podStartSLOduration=100.493274658 podStartE2EDuration="1m40.493274658s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:44.491726717 +0000 UTC m=+120.159818395" watchObservedRunningTime="2026-01-21 09:56:44.493274658 +0000 UTC m=+120.161366336"
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.510557 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:44 crc kubenswrapper[5119]: E0121 09:56:44.512223 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:45.012204044 +0000 UTC m=+120.680295722 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.513171 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:44 crc kubenswrapper[5119]: E0121 09:56:44.513680 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:45.013648422 +0000 UTC m=+120.681740100 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.531011 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-q8nfm"
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.615702 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:44 crc kubenswrapper[5119]: E0121 09:56:44.616844 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:45.116823359 +0000 UTC m=+120.784915037 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.666883 5119 ???:1] "http: TLS handshake error from 192.168.126.11:40650: no serving certificate available for the kubelet"
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.716924 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f53b6ab7-e57d-4f85-adef-9a60515f8f1f-secret-volume\") pod \"f53b6ab7-e57d-4f85-adef-9a60515f8f1f\" (UID: \"f53b6ab7-e57d-4f85-adef-9a60515f8f1f\") "
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.717016 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8qvc\" (UniqueName: \"kubernetes.io/projected/f53b6ab7-e57d-4f85-adef-9a60515f8f1f-kube-api-access-s8qvc\") pod \"f53b6ab7-e57d-4f85-adef-9a60515f8f1f\" (UID: \"f53b6ab7-e57d-4f85-adef-9a60515f8f1f\") "
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.717079 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f53b6ab7-e57d-4f85-adef-9a60515f8f1f-config-volume\") pod \"f53b6ab7-e57d-4f85-adef-9a60515f8f1f\" (UID: \"f53b6ab7-e57d-4f85-adef-9a60515f8f1f\") "
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.717265 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:44 crc kubenswrapper[5119]: E0121 09:56:44.717523 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:45.21750954 +0000 UTC m=+120.885601218 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.718445 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f53b6ab7-e57d-4f85-adef-9a60515f8f1f-config-volume" (OuterVolumeSpecName: "config-volume") pod "f53b6ab7-e57d-4f85-adef-9a60515f8f1f" (UID: "f53b6ab7-e57d-4f85-adef-9a60515f8f1f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.722563 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f53b6ab7-e57d-4f85-adef-9a60515f8f1f-kube-api-access-s8qvc" (OuterVolumeSpecName: "kube-api-access-s8qvc") pod "f53b6ab7-e57d-4f85-adef-9a60515f8f1f" (UID: "f53b6ab7-e57d-4f85-adef-9a60515f8f1f"). InnerVolumeSpecName "kube-api-access-s8qvc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.723955 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f53b6ab7-e57d-4f85-adef-9a60515f8f1f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f53b6ab7-e57d-4f85-adef-9a60515f8f1f" (UID: "f53b6ab7-e57d-4f85-adef-9a60515f8f1f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.817880 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.818333 5119 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f53b6ab7-e57d-4f85-adef-9a60515f8f1f-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.818363 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s8qvc\" (UniqueName: \"kubernetes.io/projected/f53b6ab7-e57d-4f85-adef-9a60515f8f1f-kube-api-access-s8qvc\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.818372 5119 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f53b6ab7-e57d-4f85-adef-9a60515f8f1f-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:44 crc kubenswrapper[5119]: E0121 09:56:44.818497 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:45.318457667 +0000 UTC m=+120.986549345 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:44 crc kubenswrapper[5119]: I0121 09:56:44.919982 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:44 crc kubenswrapper[5119]: E0121 09:56:44.920320 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:45.420308078 +0000 UTC m=+121.088399756 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:45 crc kubenswrapper[5119]: I0121 09:56:45.020692 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:45 crc kubenswrapper[5119]: E0121 09:56:45.021033 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:45.52101636 +0000 UTC m=+121.189108038 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:45 crc kubenswrapper[5119]: I0121 09:56:45.062451 5119 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cn8mz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 09:56:45 crc kubenswrapper[5119]: [-]has-synced failed: reason withheld
Jan 21 09:56:45 crc kubenswrapper[5119]: [+]process-running ok
Jan 21 09:56:45 crc kubenswrapper[5119]: healthz check failed
Jan 21 09:56:45 crc kubenswrapper[5119]: I0121 09:56:45.062506 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cn8mz" podUID="333793e1-92de-4fbe-83b5-26b64848c6af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 09:56:45 crc kubenswrapper[5119]: I0121 09:56:45.122806 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:45 crc kubenswrapper[5119]: E0121 09:56:45.123166 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:45.6231512 +0000 UTC m=+121.291242968 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:45 crc kubenswrapper[5119]: I0121 09:56:45.224406 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:45 crc kubenswrapper[5119]: E0121 09:56:45.224691 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:45.724676082 +0000 UTC m=+121.392767760 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:45 crc kubenswrapper[5119]: I0121 09:56:45.255114 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qpbqz" event={"ID":"7cdaf693-5dea-4260-bb45-209fcd54b53e","Type":"ContainerStarted","Data":"d8ee99430ecd2cea7486c41d548504677dc53bbee91a32a503f161134d341f45"}
Jan 21 09:56:45 crc kubenswrapper[5119]: I0121 09:56:45.258372 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Jan 21 09:56:45 crc kubenswrapper[5119]: I0121 09:56:45.260733 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"fbf74d1fc692a3245ff41794add57b6678b1278b4174c1a9ac83e01e90150291"}
Jan 21 09:56:45 crc kubenswrapper[5119]: I0121 09:56:45.262083 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 09:56:45 crc kubenswrapper[5119]: I0121 09:56:45.264193 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-q8nfm" event={"ID":"f53b6ab7-e57d-4f85-adef-9a60515f8f1f","Type":"ContainerDied","Data":"304cfd66cc9dd19e059433737eac2672837212cf9dc77dde1d44c8e2afc7e3a8"}
Jan 21 09:56:45 crc kubenswrapper[5119]: I0121 09:56:45.264232 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="304cfd66cc9dd19e059433737eac2672837212cf9dc77dde1d44c8e2afc7e3a8"
Jan 21 09:56:45 crc kubenswrapper[5119]: I0121 09:56:45.264267 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-q8nfm"
Jan 21 09:56:45 crc kubenswrapper[5119]: I0121 09:56:45.266749 5119 generic.go:358] "Generic (PLEG): container finished" podID="4aa99c79-80c4-41a9-beaa-32c9643971e5" containerID="1e64d330d2b30a41021ba0bdf92675ab5e83830d9985b9409573304ac6eef7d6" exitCode=0
Jan 21 09:56:45 crc kubenswrapper[5119]: I0121 09:56:45.266874 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6z5f6" event={"ID":"4aa99c79-80c4-41a9-beaa-32c9643971e5","Type":"ContainerDied","Data":"1e64d330d2b30a41021ba0bdf92675ab5e83830d9985b9409573304ac6eef7d6"}
Jan 21 09:56:45 crc kubenswrapper[5119]: I0121 09:56:45.270252 5119 generic.go:358] "Generic (PLEG): container finished" podID="6783a1d3-549e-4077-9898-723d2984e451" containerID="5a5ebe736f916df53e36b4219a57af97cc83e9c8f259135cd06e97d45b428d30" exitCode=0
Jan 21 09:56:45 crc kubenswrapper[5119]: I0121 09:56:45.270365 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nmdb" event={"ID":"6783a1d3-549e-4077-9898-723d2984e451","Type":"ContainerDied","Data":"5a5ebe736f916df53e36b4219a57af97cc83e9c8f259135cd06e97d45b428d30"}
Jan 21 09:56:45 crc kubenswrapper[5119]: I0121 09:56:45.284201 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=26.284183852 podStartE2EDuration="26.284183852s" podCreationTimestamp="2026-01-21 09:56:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:45.281746538 +0000 UTC m=+120.949838216" watchObservedRunningTime="2026-01-21 09:56:45.284183852 +0000 UTC m=+120.952275520"
Jan 21 09:56:45 crc kubenswrapper[5119]: I0121 09:56:45.346585 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:45 crc kubenswrapper[5119]: E0121 09:56:45.350325 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:45.85031158 +0000 UTC m=+121.518403258 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:45 crc kubenswrapper[5119]: I0121 09:56:45.448103 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:45 crc kubenswrapper[5119]: E0121 09:56:45.448317 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:45.948298038 +0000 UTC m=+121.616389716 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:45 crc kubenswrapper[5119]: I0121 09:56:45.551361 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:45 crc kubenswrapper[5119]: E0121 09:56:45.551765 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:46.051712722 +0000 UTC m=+121.719804400 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:45 crc kubenswrapper[5119]: I0121 09:56:45.652384 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:45 crc kubenswrapper[5119]: E0121 09:56:45.652592 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:46.152550836 +0000 UTC m=+121.820642514 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:45 crc kubenswrapper[5119]: I0121 09:56:45.652928 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:45 crc kubenswrapper[5119]: E0121 09:56:45.653274 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:46.153259445 +0000 UTC m=+121.821351123 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:45 crc kubenswrapper[5119]: I0121 09:56:45.754224 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:45 crc kubenswrapper[5119]: E0121 09:56:45.754777 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:46.254760167 +0000 UTC m=+121.922851845 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:45 crc kubenswrapper[5119]: I0121 09:56:45.856015 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:45 crc kubenswrapper[5119]: E0121 09:56:45.856356 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:46.356344632 +0000 UTC m=+122.024436310 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:45 crc kubenswrapper[5119]: I0121 09:56:45.957144 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:45 crc kubenswrapper[5119]: E0121 09:56:45.957302 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:46.457275349 +0000 UTC m=+122.125367027 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:45 crc kubenswrapper[5119]: I0121 09:56:45.957560 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:45 crc kubenswrapper[5119]: E0121 09:56:45.957886 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:46.457871175 +0000 UTC m=+122.125962843 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.020830 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.021550 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f53b6ab7-e57d-4f85-adef-9a60515f8f1f" containerName="collect-profiles" Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.021572 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="f53b6ab7-e57d-4f85-adef-9a60515f8f1f" containerName="collect-profiles" Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.021682 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="f53b6ab7-e57d-4f85-adef-9a60515f8f1f" containerName="collect-profiles" Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.030515 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.031594 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.032417 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.034089 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.059011 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.059211 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a860d3c-104e-4e29-8722-e6b4c5389f33-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"9a860d3c-104e-4e29-8722-e6b4c5389f33\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.059267 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a860d3c-104e-4e29-8722-e6b4c5389f33-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"9a860d3c-104e-4e29-8722-e6b4c5389f33\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 09:56:46 crc kubenswrapper[5119]: E0121 09:56:46.059355 5119 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:46.559329946 +0000 UTC m=+122.227421624 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.063260 5119 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cn8mz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 09:56:46 crc kubenswrapper[5119]: [-]has-synced failed: reason withheld Jan 21 09:56:46 crc kubenswrapper[5119]: [+]process-running ok Jan 21 09:56:46 crc kubenswrapper[5119]: healthz check failed Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.063309 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cn8mz" podUID="333793e1-92de-4fbe-83b5-26b64848c6af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.160493 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a860d3c-104e-4e29-8722-e6b4c5389f33-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"9a860d3c-104e-4e29-8722-e6b4c5389f33\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 09:56:46 crc kubenswrapper[5119]: 
I0121 09:56:46.160563 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.160592 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a860d3c-104e-4e29-8722-e6b4c5389f33-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"9a860d3c-104e-4e29-8722-e6b4c5389f33\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.160692 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a860d3c-104e-4e29-8722-e6b4c5389f33-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"9a860d3c-104e-4e29-8722-e6b4c5389f33\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 09:56:46 crc kubenswrapper[5119]: E0121 09:56:46.161270 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:46.661258279 +0000 UTC m=+122.329349957 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.180141 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a860d3c-104e-4e29-8722-e6b4c5389f33-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"9a860d3c-104e-4e29-8722-e6b4c5389f33\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.262098 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:46 crc kubenswrapper[5119]: E0121 09:56:46.262341 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:46.762326571 +0000 UTC m=+122.430418249 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.354858 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.364424 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:46 crc kubenswrapper[5119]: E0121 09:56:46.364897 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:46.864876161 +0000 UTC m=+122.532967839 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.405020 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86" Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.405092 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86" Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.412171 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86" Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.465057 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:46 crc kubenswrapper[5119]: E0121 09:56:46.466173 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:46.966134346 +0000 UTC m=+122.634226024 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.568465 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:46 crc kubenswrapper[5119]: E0121 09:56:46.569119 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:47.069107418 +0000 UTC m=+122.737199086 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.678485 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:46 crc kubenswrapper[5119]: E0121 09:56:46.679879 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:47.179856818 +0000 UTC m=+122.847948506 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.768423 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-8nd58" Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.768561 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-8nd58" Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.769851 5119 patch_prober.go:28] interesting pod/console-64d44f6ddf-8nd58 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.769917 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-8nd58" podUID="fab52538-cd8b-408e-b571-f2dc516dc2a3" containerName="console" probeResult="failure" output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.781635 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:46 crc 
kubenswrapper[5119]: E0121 09:56:46.782384 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:47.282233024 +0000 UTC m=+122.950324702 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.853807 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.871947 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc" Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.871997 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc" Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.882425 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:46 crc kubenswrapper[5119]: E0121 09:56:46.882499 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:47.382480773 +0000 UTC m=+123.050572451 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.882644 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.882704 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc" Jan 21 09:56:46 crc kubenswrapper[5119]: E0121 09:56:46.884100 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:47.384079225 +0000 UTC m=+123.052171003 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.983829 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:46 crc kubenswrapper[5119]: E0121 09:56:46.984023 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:47.483979584 +0000 UTC m=+123.152071262 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:46 crc kubenswrapper[5119]: I0121 09:56:46.984895 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:46 crc kubenswrapper[5119]: E0121 09:56:46.985378 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:47.485368442 +0000 UTC m=+123.153460120 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:47 crc kubenswrapper[5119]: I0121 09:56:47.059200 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-cn8mz" Jan 21 09:56:47 crc kubenswrapper[5119]: I0121 09:56:47.063103 5119 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cn8mz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 09:56:47 crc kubenswrapper[5119]: [-]has-synced failed: reason withheld Jan 21 09:56:47 crc kubenswrapper[5119]: [+]process-running ok Jan 21 09:56:47 crc kubenswrapper[5119]: healthz check failed Jan 21 09:56:47 crc kubenswrapper[5119]: I0121 09:56:47.063159 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cn8mz" podUID="333793e1-92de-4fbe-83b5-26b64848c6af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 09:56:47 crc kubenswrapper[5119]: I0121 09:56:47.086386 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:47 crc kubenswrapper[5119]: E0121 09:56:47.086591 5119 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:47.586556196 +0000 UTC m=+123.254647874 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:47 crc kubenswrapper[5119]: I0121 09:56:47.086728 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:56:47 crc kubenswrapper[5119]: E0121 09:56:47.087044 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:47.587031768 +0000 UTC m=+123.255123446 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:56:47 crc kubenswrapper[5119]: I0121 09:56:47.087217 5119 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 21 09:56:47 crc kubenswrapper[5119]: I0121 09:56:47.188058 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:56:47 crc kubenswrapper[5119]: E0121 09:56:47.188242 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:47.688201842 +0000 UTC m=+123.356293520 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:47 crc kubenswrapper[5119]: I0121 09:56:47.288523 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"9a860d3c-104e-4e29-8722-e6b4c5389f33","Type":"ContainerStarted","Data":"b5f129451e57bdb666d7f9be2b7c4f960e926f83c81f45cd1956e8f082e6c888"}
Jan 21 09:56:47 crc kubenswrapper[5119]: I0121 09:56:47.289352 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:47 crc kubenswrapper[5119]: E0121 09:56:47.289730 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:47.789714924 +0000 UTC m=+123.457806602 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:47 crc kubenswrapper[5119]: I0121 09:56:47.303356 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qpbqz" event={"ID":"7cdaf693-5dea-4260-bb45-209fcd54b53e","Type":"ContainerStarted","Data":"049d1be8ce188ee1a09050722e593ae5e07bf64e924765a837cb313574424248"}
Jan 21 09:56:47 crc kubenswrapper[5119]: I0121 09:56:47.307757 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-wrb86"
Jan 21 09:56:47 crc kubenswrapper[5119]: I0121 09:56:47.308286 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-2glqc"
Jan 21 09:56:47 crc kubenswrapper[5119]: I0121 09:56:47.390880 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:47 crc kubenswrapper[5119]: E0121 09:56:47.391728 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:47.89170875 +0000 UTC m=+123.559800428 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:47 crc kubenswrapper[5119]: I0121 09:56:47.492038 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:47 crc kubenswrapper[5119]: E0121 09:56:47.492426 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:47.992409841 +0000 UTC m=+123.660501509 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:47 crc kubenswrapper[5119]: I0121 09:56:47.593700 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:47 crc kubenswrapper[5119]: E0121 09:56:47.593854 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:56:48.093839791 +0000 UTC m=+123.761931469 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:47 crc kubenswrapper[5119]: I0121 09:56:47.695639 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:47 crc kubenswrapper[5119]: E0121 09:56:47.695949 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:56:48.195936299 +0000 UTC m=+123.864027977 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-94gcl" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:56:47 crc kubenswrapper[5119]: I0121 09:56:47.780378 5119 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-21T09:56:47.087238544Z","UUID":"b5e214ad-dead-4d3f-a96b-dd4b0dc4d9be","Handler":null,"Name":"","Endpoint":""}
Jan 21 09:56:47 crc kubenswrapper[5119]: I0121 09:56:47.795165 5119 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 21 09:56:47 crc kubenswrapper[5119]: I0121 09:56:47.795550 5119 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 21 09:56:47 crc kubenswrapper[5119]: I0121 09:56:47.796443 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:56:47 crc kubenswrapper[5119]: I0121 09:56:47.802827 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue ""
Jan 21 09:56:47 crc kubenswrapper[5119]: I0121 09:56:47.898539 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:47 crc kubenswrapper[5119]: I0121 09:56:47.901195 5119 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 21 09:56:47 crc kubenswrapper[5119]: I0121 09:56:47.901231 5119 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:47 crc kubenswrapper[5119]: I0121 09:56:47.927550 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-94gcl\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:48 crc kubenswrapper[5119]: I0121 09:56:48.065274 5119 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cn8mz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 09:56:48 crc kubenswrapper[5119]: [-]has-synced failed: reason withheld
Jan 21 09:56:48 crc kubenswrapper[5119]: [+]process-running ok
Jan 21 09:56:48 crc kubenswrapper[5119]: healthz check failed
Jan 21 09:56:48 crc kubenswrapper[5119]: I0121 09:56:48.065344 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cn8mz" podUID="333793e1-92de-4fbe-83b5-26b64848c6af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 09:56:48 crc kubenswrapper[5119]: I0121 09:56:48.119268 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Jan 21 09:56:48 crc kubenswrapper[5119]: I0121 09:56:48.129259 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:56:48 crc kubenswrapper[5119]: I0121 09:56:48.320736 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qpbqz" event={"ID":"7cdaf693-5dea-4260-bb45-209fcd54b53e","Type":"ContainerStarted","Data":"17cfe0c2f8c51162c2d93accec8c1b1986ab3338b5e5b90b222e7e464d583bee"}
Jan 21 09:56:48 crc kubenswrapper[5119]: I0121 09:56:48.320776 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qpbqz" event={"ID":"7cdaf693-5dea-4260-bb45-209fcd54b53e","Type":"ContainerStarted","Data":"a3b244375baa6ed78269980e887b02e296234469727a27efaf6e20d5ed185b62"}
Jan 21 09:56:48 crc kubenswrapper[5119]: I0121 09:56:48.323597 5119 generic.go:358] "Generic (PLEG): container finished" podID="9a860d3c-104e-4e29-8722-e6b4c5389f33" containerID="f08ec370c0199324ce31f5c9ef292ae9fb9b8510da9deddc7a11ca707cf93090" exitCode=0
Jan 21 09:56:48 crc kubenswrapper[5119]: I0121 09:56:48.323973 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"9a860d3c-104e-4e29-8722-e6b4c5389f33","Type":"ContainerDied","Data":"f08ec370c0199324ce31f5c9ef292ae9fb9b8510da9deddc7a11ca707cf93090"}
Jan 21 09:56:48 crc kubenswrapper[5119]: I0121 09:56:48.340526 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-qpbqz" podStartSLOduration=15.340509333 podStartE2EDuration="15.340509333s" podCreationTimestamp="2026-01-21 09:56:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:56:48.33585718 +0000 UTC m=+124.003948868" watchObservedRunningTime="2026-01-21 09:56:48.340509333 +0000 UTC m=+124.008601011"
Jan 21 09:56:48 crc kubenswrapper[5119]: I0121 09:56:48.601579 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes"
Jan 21 09:56:49 crc kubenswrapper[5119]: I0121 09:56:49.063398 5119 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cn8mz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 09:56:49 crc kubenswrapper[5119]: [-]has-synced failed: reason withheld
Jan 21 09:56:49 crc kubenswrapper[5119]: [+]process-running ok
Jan 21 09:56:49 crc kubenswrapper[5119]: healthz check failed
Jan 21 09:56:49 crc kubenswrapper[5119]: I0121 09:56:49.063476 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cn8mz" podUID="333793e1-92de-4fbe-83b5-26b64848c6af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 09:56:49 crc kubenswrapper[5119]: I0121 09:56:49.629726 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Jan 21 09:56:49 crc kubenswrapper[5119]: I0121 09:56:49.824252 5119 ???:1] "http: TLS handshake error from 192.168.126.11:40664: no serving certificate available for the kubelet"
Jan 21 09:56:50 crc kubenswrapper[5119]: I0121 09:56:50.062174 5119 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cn8mz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 09:56:50 crc kubenswrapper[5119]: [-]has-synced failed: reason withheld
Jan 21 09:56:50 crc kubenswrapper[5119]: [+]process-running ok
Jan 21 09:56:50 crc kubenswrapper[5119]: healthz check failed
Jan 21 09:56:50 crc kubenswrapper[5119]: I0121 09:56:50.062263 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cn8mz" podUID="333793e1-92de-4fbe-83b5-26b64848c6af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 09:56:50 crc kubenswrapper[5119]: I0121 09:56:50.818410 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-8224m"
Jan 21 09:56:50 crc kubenswrapper[5119]: I0121 09:56:50.818448 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Jan 21 09:56:50 crc kubenswrapper[5119]: I0121 09:56:50.821505 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 21 09:56:50 crc kubenswrapper[5119]: I0121 09:56:50.826796 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Jan 21 09:56:50 crc kubenswrapper[5119]: I0121 09:56:50.827019 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Jan 21 09:56:50 crc kubenswrapper[5119]: I0121 09:56:50.895289 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a7b2f24-fc3a-4e6a-92a8-890523c80ce3-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"7a7b2f24-fc3a-4e6a-92a8-890523c80ce3\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 21 09:56:50 crc kubenswrapper[5119]: I0121 09:56:50.895345 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a7b2f24-fc3a-4e6a-92a8-890523c80ce3-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"7a7b2f24-fc3a-4e6a-92a8-890523c80ce3\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 21 09:56:50 crc kubenswrapper[5119]: I0121 09:56:50.996563 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a7b2f24-fc3a-4e6a-92a8-890523c80ce3-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"7a7b2f24-fc3a-4e6a-92a8-890523c80ce3\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 21 09:56:50 crc kubenswrapper[5119]: I0121 09:56:50.996622 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a7b2f24-fc3a-4e6a-92a8-890523c80ce3-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"7a7b2f24-fc3a-4e6a-92a8-890523c80ce3\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 21 09:56:50 crc kubenswrapper[5119]: I0121 09:56:50.996993 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a7b2f24-fc3a-4e6a-92a8-890523c80ce3-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"7a7b2f24-fc3a-4e6a-92a8-890523c80ce3\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 21 09:56:51 crc kubenswrapper[5119]: I0121 09:56:51.041335 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a7b2f24-fc3a-4e6a-92a8-890523c80ce3-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"7a7b2f24-fc3a-4e6a-92a8-890523c80ce3\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 21 09:56:51 crc kubenswrapper[5119]: I0121 09:56:51.063818 5119 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cn8mz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 09:56:51 crc kubenswrapper[5119]: [-]has-synced failed: reason withheld
Jan 21 09:56:51 crc kubenswrapper[5119]: [+]process-running ok
Jan 21 09:56:51 crc kubenswrapper[5119]: healthz check failed
Jan 21 09:56:51 crc kubenswrapper[5119]: I0121 09:56:51.063884 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cn8mz" podUID="333793e1-92de-4fbe-83b5-26b64848c6af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 09:56:51 crc kubenswrapper[5119]: E0121 09:56:51.125331 5119 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="522c9f419263dda7903f58c657727688825beb62fbd951a112617dc2ee68863f" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 21 09:56:51 crc kubenswrapper[5119]: E0121 09:56:51.129915 5119 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="522c9f419263dda7903f58c657727688825beb62fbd951a112617dc2ee68863f" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 21 09:56:51 crc kubenswrapper[5119]: E0121 09:56:51.133436 5119 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="522c9f419263dda7903f58c657727688825beb62fbd951a112617dc2ee68863f" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 21 09:56:51 crc kubenswrapper[5119]: E0121 09:56:51.133483 5119 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-bc4nv" podUID="b4c88372-61dd-4fb9-8bcf-7c51ec904dd8" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Jan 21 09:56:51 crc kubenswrapper[5119]: I0121 09:56:51.138968 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 21 09:56:51 crc kubenswrapper[5119]: I0121 09:56:51.779238 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 21 09:56:51 crc kubenswrapper[5119]: I0121 09:56:51.913845 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a860d3c-104e-4e29-8722-e6b4c5389f33-kube-api-access\") pod \"9a860d3c-104e-4e29-8722-e6b4c5389f33\" (UID: \"9a860d3c-104e-4e29-8722-e6b4c5389f33\") "
Jan 21 09:56:51 crc kubenswrapper[5119]: I0121 09:56:51.914034 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a860d3c-104e-4e29-8722-e6b4c5389f33-kubelet-dir\") pod \"9a860d3c-104e-4e29-8722-e6b4c5389f33\" (UID: \"9a860d3c-104e-4e29-8722-e6b4c5389f33\") "
Jan 21 09:56:51 crc kubenswrapper[5119]: I0121 09:56:51.914246 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a860d3c-104e-4e29-8722-e6b4c5389f33-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9a860d3c-104e-4e29-8722-e6b4c5389f33" (UID: "9a860d3c-104e-4e29-8722-e6b4c5389f33"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 09:56:51 crc kubenswrapper[5119]: I0121 09:56:51.918678 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a860d3c-104e-4e29-8722-e6b4c5389f33-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9a860d3c-104e-4e29-8722-e6b4c5389f33" (UID: "9a860d3c-104e-4e29-8722-e6b4c5389f33"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:56:52 crc kubenswrapper[5119]: I0121 09:56:52.015796 5119 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a860d3c-104e-4e29-8722-e6b4c5389f33-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:52 crc kubenswrapper[5119]: I0121 09:56:52.015831 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a860d3c-104e-4e29-8722-e6b4c5389f33-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 21 09:56:52 crc kubenswrapper[5119]: I0121 09:56:52.125545 5119 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cn8mz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 09:56:52 crc kubenswrapper[5119]: [-]has-synced failed: reason withheld
Jan 21 09:56:52 crc kubenswrapper[5119]: [+]process-running ok
Jan 21 09:56:52 crc kubenswrapper[5119]: healthz check failed
Jan 21 09:56:52 crc kubenswrapper[5119]: I0121 09:56:52.125634 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cn8mz" podUID="333793e1-92de-4fbe-83b5-26b64848c6af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 09:56:52 crc kubenswrapper[5119]: I0121 09:56:52.343941 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"9a860d3c-104e-4e29-8722-e6b4c5389f33","Type":"ContainerDied","Data":"b5f129451e57bdb666d7f9be2b7c4f960e926f83c81f45cd1956e8f082e6c888"}
Jan 21 09:56:52 crc kubenswrapper[5119]: I0121 09:56:52.344228 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5f129451e57bdb666d7f9be2b7c4f960e926f83c81f45cd1956e8f082e6c888"
Jan 21 09:56:52 crc kubenswrapper[5119]: I0121 09:56:52.344303 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 21 09:56:53 crc kubenswrapper[5119]: I0121 09:56:53.062081 5119 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cn8mz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 09:56:53 crc kubenswrapper[5119]: [-]has-synced failed: reason withheld
Jan 21 09:56:53 crc kubenswrapper[5119]: [+]process-running ok
Jan 21 09:56:53 crc kubenswrapper[5119]: healthz check failed
Jan 21 09:56:53 crc kubenswrapper[5119]: I0121 09:56:53.062163 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cn8mz" podUID="333793e1-92de-4fbe-83b5-26b64848c6af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 09:56:54 crc kubenswrapper[5119]: I0121 09:56:54.061782 5119 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-cn8mz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 09:56:54 crc kubenswrapper[5119]: [-]has-synced failed: reason withheld
Jan 21 09:56:54 crc kubenswrapper[5119]: [+]process-running ok
Jan 21 09:56:54 crc kubenswrapper[5119]: healthz check failed
Jan 21 09:56:54 crc kubenswrapper[5119]: I0121 09:56:54.062102 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-cn8mz" podUID="333793e1-92de-4fbe-83b5-26b64848c6af" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 09:56:54 crc kubenswrapper[5119]: I0121 09:56:54.418985 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-prvwt"
Jan 21 09:56:55 crc kubenswrapper[5119]: I0121 09:56:55.062249 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-cn8mz"
Jan 21 09:56:55 crc kubenswrapper[5119]: I0121 09:56:55.066164 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-cn8mz"
Jan 21 09:56:56 crc kubenswrapper[5119]: I0121 09:56:56.282888 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 09:56:56 crc kubenswrapper[5119]: I0121 09:56:56.766527 5119 patch_prober.go:28] interesting pod/console-64d44f6ddf-8nd58 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Jan 21 09:56:56 crc kubenswrapper[5119]: I0121 09:56:56.766636 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-8nd58" podUID="fab52538-cd8b-408e-b571-f2dc516dc2a3" containerName="console" probeResult="failure" output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused"
Jan 21 09:57:00 crc kubenswrapper[5119]: I0121 09:57:00.063211 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl"
Jan 21 09:57:00 crc kubenswrapper[5119]: I0121 09:57:00.096208 5119 ???:1] "http: TLS handshake error from 192.168.126.11:39570: no serving certificate available for the kubelet"
Jan 21 09:57:01 crc kubenswrapper[5119]: E0121 09:57:01.121873 5119 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="522c9f419263dda7903f58c657727688825beb62fbd951a112617dc2ee68863f" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 21 09:57:01 crc kubenswrapper[5119]: E0121 09:57:01.123395 5119 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="522c9f419263dda7903f58c657727688825beb62fbd951a112617dc2ee68863f" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 21 09:57:01 crc kubenswrapper[5119]: E0121 09:57:01.125193 5119 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="522c9f419263dda7903f58c657727688825beb62fbd951a112617dc2ee68863f" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 21 09:57:01 crc kubenswrapper[5119]: E0121 09:57:01.125258 5119 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-bc4nv" podUID="b4c88372-61dd-4fb9-8bcf-7c51ec904dd8" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Jan 21 09:57:06 crc kubenswrapper[5119]: I0121 09:57:06.807475 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-8nd58"
Jan 21 09:57:06 crc kubenswrapper[5119]: I0121 09:57:06.811980 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-8nd58"
Jan 21 09:57:07 crc kubenswrapper[5119]: I0121 09:57:07.157721 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 21 09:57:08 crc kubenswrapper[5119]: I0121 09:57:08.497249 5119 generic.go:358] "Generic (PLEG): container finished" podID="3f4078de-237b-4252-be46-f0b89d21c8ed" containerID="c343b7cdea6e599a63dcb11499b382afbaf4fca4a57a34514e5361dd9b3ba5dd" exitCode=0
Jan 21 09:57:08 crc kubenswrapper[5119]: I0121 09:57:08.497320 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wn7cq" event={"ID":"3f4078de-237b-4252-be46-f0b89d21c8ed","Type":"ContainerDied","Data":"c343b7cdea6e599a63dcb11499b382afbaf4fca4a57a34514e5361dd9b3ba5dd"}
Jan 21 09:57:08 crc kubenswrapper[5119]: I0121 09:57:08.502436 5119 generic.go:358] "Generic (PLEG): container finished" podID="07776dee-a157-4c69-ae94-c63a101a84f2" containerID="60b8efe091becd04afa18c2bfb6b75ed99d5ccf66070f83f0e5824e86bc2a07a" exitCode=0
Jan 21 09:57:08 crc kubenswrapper[5119]: I0121 09:57:08.502496 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4wln" event={"ID":"07776dee-a157-4c69-ae94-c63a101a84f2","Type":"ContainerDied","Data":"60b8efe091becd04afa18c2bfb6b75ed99d5ccf66070f83f0e5824e86bc2a07a"}
Jan 21 09:57:08 crc kubenswrapper[5119]: I0121 09:57:08.505117 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gxxh" event={"ID":"9bccb111-fc78-420c-bb88-788974b0d7d5","Type":"ContainerStarted","Data":"85f037ab7a03b908500fa78f0dc848eaa29467d2f55de9b2282c96b68f41b908"}
Jan 21 09:57:08 crc kubenswrapper[5119]: I0121 09:57:08.507422 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w7cjs" event={"ID":"49467157-6fc6-4f0b-b833-1b95a6068d7e","Type":"ContainerStarted","Data":"d11d9a2abf3ee6a1d57e2bc700370b8d9789db08d1b83edd743f862fd93867bb"}
Jan 21 09:57:08 crc kubenswrapper[5119]: I0121 09:57:08.509764 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6z5f6" event={"ID":"4aa99c79-80c4-41a9-beaa-32c9643971e5","Type":"ContainerStarted","Data":"6a18e62518b6c198ac658e9919d776a7422bbfa9b94b2fed2d763b9de6904c2d"}
Jan 21 09:57:08 crc kubenswrapper[5119]: I0121 09:57:08.512050 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zm2f7" event={"ID":"a90aed0b-2281-4055-843d-678b06d52325","Type":"ContainerStarted","Data":"91fa5f62fb858a325b272d329fa9c20a5a9c7d7e31323aa89546359971d2a002"}
Jan 21 09:57:08 crc kubenswrapper[5119]: I0121 09:57:08.514427 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgvgn" event={"ID":"eeb3fd25-b829-4977-aac0-aa2539bf13d0","Type":"ContainerStarted","Data":"37049f1013f9fec4c633ca41d61b28ee1f5c3a275eb495f20f500c1dafe8255e"}
Jan 21 09:57:08 crc kubenswrapper[5119]: I0121 09:57:08.515807 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nmdb" event={"ID":"6783a1d3-549e-4077-9898-723d2984e451","Type":"ContainerStarted","Data":"f0bca505eb9d5f3462a3843d1bb088cde07d143b82a1065cc24822eeaba09d34"}
Jan 21 09:57:08 crc kubenswrapper[5119]: I0121 09:57:08.524873 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Jan 21 09:57:08 crc kubenswrapper[5119]: I0121 09:57:08.605581 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-94gcl"]
Jan 21 09:57:09 crc kubenswrapper[5119]: I0121 09:57:09.524320 5119 generic.go:358] "Generic (PLEG): container finished" podID="9bccb111-fc78-420c-bb88-788974b0d7d5" containerID="85f037ab7a03b908500fa78f0dc848eaa29467d2f55de9b2282c96b68f41b908" exitCode=0
Jan 21 09:57:09 crc kubenswrapper[5119]: I0121 09:57:09.524432 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gxxh" event={"ID":"9bccb111-fc78-420c-bb88-788974b0d7d5","Type":"ContainerDied","Data":"85f037ab7a03b908500fa78f0dc848eaa29467d2f55de9b2282c96b68f41b908"}
Jan 21 09:57:09 crc kubenswrapper[5119]: I0121 09:57:09.526732 5119 generic.go:358] "Generic (PLEG): container finished" podID="49467157-6fc6-4f0b-b833-1b95a6068d7e" containerID="d11d9a2abf3ee6a1d57e2bc700370b8d9789db08d1b83edd743f862fd93867bb" exitCode=0
Jan 21 09:57:09 crc kubenswrapper[5119]: I0121 09:57:09.526800 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w7cjs" event={"ID":"49467157-6fc6-4f0b-b833-1b95a6068d7e","Type":"ContainerDied","Data":"d11d9a2abf3ee6a1d57e2bc700370b8d9789db08d1b83edd743f862fd93867bb"}
Jan 21 09:57:09 crc kubenswrapper[5119]: I0121 09:57:09.530843 5119 generic.go:358] "Generic (PLEG): container finished" podID="4aa99c79-80c4-41a9-beaa-32c9643971e5" containerID="6a18e62518b6c198ac658e9919d776a7422bbfa9b94b2fed2d763b9de6904c2d" exitCode=0
Jan 21 09:57:09 crc kubenswrapper[5119]: I0121 09:57:09.530864 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6z5f6" event={"ID":"4aa99c79-80c4-41a9-beaa-32c9643971e5","Type":"ContainerDied","Data":"6a18e62518b6c198ac658e9919d776a7422bbfa9b94b2fed2d763b9de6904c2d"}
Jan 21 09:57:09 crc kubenswrapper[5119]: I0121 09:57:09.535651 5119 generic.go:358] "Generic (PLEG): container finished" podID="a90aed0b-2281-4055-843d-678b06d52325" containerID="91fa5f62fb858a325b272d329fa9c20a5a9c7d7e31323aa89546359971d2a002" exitCode=0
Jan 21 09:57:09 crc kubenswrapper[5119]: I0121 09:57:09.535716 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zm2f7" event={"ID":"a90aed0b-2281-4055-843d-678b06d52325","Type":"ContainerDied","Data":"91fa5f62fb858a325b272d329fa9c20a5a9c7d7e31323aa89546359971d2a002"}
Jan 21 09:57:09 crc kubenswrapper[5119]: I0121 09:57:09.539714 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-94gcl" event={"ID":"7c21f56d-7f02-4bb3-bc7e-82b4d990e381","Type":"ContainerStarted","Data":"0fcbe996c79ea8d19e08c761140b51257cf65f78566c2b06b9ca8ec0084d446d"}
Jan 21 09:57:09 crc kubenswrapper[5119]: I0121 09:57:09.539837 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-94gcl" event={"ID":"7c21f56d-7f02-4bb3-bc7e-82b4d990e381","Type":"ContainerStarted","Data":"387349e9b44a21dbfd81e00c4de987153a2d13f74f2cb776de7e06c8afe54a4c"}
Jan 21 09:57:09 crc kubenswrapper[5119]: I0121 09:57:09.541541 5119 generic.go:358] "Generic (PLEG): container finished" podID="eeb3fd25-b829-4977-aac0-aa2539bf13d0" containerID="37049f1013f9fec4c633ca41d61b28ee1f5c3a275eb495f20f500c1dafe8255e" exitCode=0
Jan 21 09:57:09 crc kubenswrapper[5119]: I0121 09:57:09.541653 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgvgn" event={"ID":"eeb3fd25-b829-4977-aac0-aa2539bf13d0","Type":"ContainerDied","Data":"37049f1013f9fec4c633ca41d61b28ee1f5c3a275eb495f20f500c1dafe8255e"}
Jan 21 09:57:09 crc kubenswrapper[5119]: I0121 09:57:09.544268 5119 generic.go:358] "Generic (PLEG): container finished" podID="6783a1d3-549e-4077-9898-723d2984e451" containerID="f0bca505eb9d5f3462a3843d1bb088cde07d143b82a1065cc24822eeaba09d34" exitCode=0
Jan 21 09:57:09 crc kubenswrapper[5119]: I0121 09:57:09.544388 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nmdb" event={"ID":"6783a1d3-549e-4077-9898-723d2984e451","Type":"ContainerDied","Data":"f0bca505eb9d5f3462a3843d1bb088cde07d143b82a1065cc24822eeaba09d34"}
Jan 21 09:57:09 crc kubenswrapper[5119]: I0121 09:57:09.544970 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-94gcl"
Jan 21 09:57:09 crc kubenswrapper[5119]: I0121 09:57:09.550799 5119 kubelet.go:2569] "SyncLoop
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-wn7cq" event={"ID":"3f4078de-237b-4252-be46-f0b89d21c8ed","Type":"ContainerStarted","Data":"61439cea4b2bc6c759e3fd32e34aa39bea786ec65207e31cd272c9facd9858fe"} Jan 21 09:57:09 crc kubenswrapper[5119]: I0121 09:57:09.564023 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4wln" event={"ID":"07776dee-a157-4c69-ae94-c63a101a84f2","Type":"ContainerStarted","Data":"4a490e997495f494203b80cf802d7aafd4cf0eded86c24f57ca9373e175c06e9"} Jan 21 09:57:09 crc kubenswrapper[5119]: I0121 09:57:09.571743 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"7a7b2f24-fc3a-4e6a-92a8-890523c80ce3","Type":"ContainerStarted","Data":"f4a9d0e37fb9c95f824c825fbe7119b4dd576d46ae94e7fb3a26b8faa96be80f"} Jan 21 09:57:09 crc kubenswrapper[5119]: I0121 09:57:09.571801 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"7a7b2f24-fc3a-4e6a-92a8-890523c80ce3","Type":"ContainerStarted","Data":"341d515ba0261f55f23e27e0a3d2a55c8e711d0e936a14193dcb64f6ce4b6315"} Jan 21 09:57:09 crc kubenswrapper[5119]: I0121 09:57:09.592848 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-94gcl" podStartSLOduration=125.592830478 podStartE2EDuration="2m5.592830478s" podCreationTimestamp="2026-01-21 09:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:57:09.588245685 +0000 UTC m=+145.256337383" watchObservedRunningTime="2026-01-21 09:57:09.592830478 +0000 UTC m=+145.260922156" Jan 21 09:57:09 crc kubenswrapper[5119]: I0121 09:57:09.695115 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j4wln" 
podStartSLOduration=4.813119507 podStartE2EDuration="28.69509383s" podCreationTimestamp="2026-01-21 09:56:41 +0000 UTC" firstStartedPulling="2026-01-21 09:56:44.236881907 +0000 UTC m=+119.904973585" lastFinishedPulling="2026-01-21 09:57:08.11885623 +0000 UTC m=+143.786947908" observedRunningTime="2026-01-21 09:57:09.691679939 +0000 UTC m=+145.359771617" watchObservedRunningTime="2026-01-21 09:57:09.69509383 +0000 UTC m=+145.363185508" Jan 21 09:57:09 crc kubenswrapper[5119]: I0121 09:57:09.731656 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wn7cq" podStartSLOduration=5.44217342 podStartE2EDuration="30.731639096s" podCreationTimestamp="2026-01-21 09:56:39 +0000 UTC" firstStartedPulling="2026-01-21 09:56:42.796069924 +0000 UTC m=+118.464161602" lastFinishedPulling="2026-01-21 09:57:08.08553561 +0000 UTC m=+143.753627278" observedRunningTime="2026-01-21 09:57:09.729469089 +0000 UTC m=+145.397560767" watchObservedRunningTime="2026-01-21 09:57:09.731639096 +0000 UTC m=+145.399730764" Jan 21 09:57:09 crc kubenswrapper[5119]: I0121 09:57:09.731949 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=20.731943584 podStartE2EDuration="20.731943584s" podCreationTimestamp="2026-01-21 09:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:57:09.704189554 +0000 UTC m=+145.372281232" watchObservedRunningTime="2026-01-21 09:57:09.731943584 +0000 UTC m=+145.400035262" Jan 21 09:57:10 crc kubenswrapper[5119]: I0121 09:57:10.298789 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wn7cq" Jan 21 09:57:10 crc kubenswrapper[5119]: I0121 09:57:10.298874 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-marketplace/certified-operators-wn7cq" Jan 21 09:57:10 crc kubenswrapper[5119]: I0121 09:57:10.358147 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wn7cq" Jan 21 09:57:10 crc kubenswrapper[5119]: I0121 09:57:10.578017 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gxxh" event={"ID":"9bccb111-fc78-420c-bb88-788974b0d7d5","Type":"ContainerStarted","Data":"b3074768bedee2cfb8e3b551c1bb3fb78486f073ff0a66549be988a2d905b70a"} Jan 21 09:57:10 crc kubenswrapper[5119]: I0121 09:57:10.581385 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w7cjs" event={"ID":"49467157-6fc6-4f0b-b833-1b95a6068d7e","Type":"ContainerStarted","Data":"0a0f44308b64040e047bc42fe33b01e052a666bbfb72e0335010ac1b947ae5bc"} Jan 21 09:57:10 crc kubenswrapper[5119]: I0121 09:57:10.583704 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6z5f6" event={"ID":"4aa99c79-80c4-41a9-beaa-32c9643971e5","Type":"ContainerStarted","Data":"11f5c68abfa93991b35bf5d9b963d44c3e189103a316dc06052623dbb3b89210"} Jan 21 09:57:10 crc kubenswrapper[5119]: I0121 09:57:10.585958 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zm2f7" event={"ID":"a90aed0b-2281-4055-843d-678b06d52325","Type":"ContainerStarted","Data":"5bda40e03e451706bfc0cb16ad5ee7f7b48ac528af04387a2a0b6e561a70281b"} Jan 21 09:57:10 crc kubenswrapper[5119]: I0121 09:57:10.588209 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgvgn" event={"ID":"eeb3fd25-b829-4977-aac0-aa2539bf13d0","Type":"ContainerStarted","Data":"303019e502b4e5dbcde8883528ba993026c2840cf2441b90cde2966207de8c9c"} Jan 21 09:57:10 crc kubenswrapper[5119]: I0121 09:57:10.591667 5119 generic.go:358] "Generic (PLEG): container finished" 
podID="7a7b2f24-fc3a-4e6a-92a8-890523c80ce3" containerID="f4a9d0e37fb9c95f824c825fbe7119b4dd576d46ae94e7fb3a26b8faa96be80f" exitCode=0 Jan 21 09:57:10 crc kubenswrapper[5119]: I0121 09:57:10.596347 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nmdb" event={"ID":"6783a1d3-549e-4077-9898-723d2984e451","Type":"ContainerStarted","Data":"4f713b6d31a4cc4ab01666fdccc55d242f87aef51b86b5be3206bdfcc800b9e2"} Jan 21 09:57:10 crc kubenswrapper[5119]: I0121 09:57:10.596486 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"7a7b2f24-fc3a-4e6a-92a8-890523c80ce3","Type":"ContainerDied","Data":"f4a9d0e37fb9c95f824c825fbe7119b4dd576d46ae94e7fb3a26b8faa96be80f"} Jan 21 09:57:10 crc kubenswrapper[5119]: I0121 09:57:10.597819 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9gxxh" podStartSLOduration=6.583798189 podStartE2EDuration="31.597809693s" podCreationTimestamp="2026-01-21 09:56:39 +0000 UTC" firstStartedPulling="2026-01-21 09:56:43.117196926 +0000 UTC m=+118.785288604" lastFinishedPulling="2026-01-21 09:57:08.13120843 +0000 UTC m=+143.799300108" observedRunningTime="2026-01-21 09:57:10.597338469 +0000 UTC m=+146.265430147" watchObservedRunningTime="2026-01-21 09:57:10.597809693 +0000 UTC m=+146.265901371" Jan 21 09:57:10 crc kubenswrapper[5119]: I0121 09:57:10.631302 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6z5f6" podStartSLOduration=4.684086793 podStartE2EDuration="27.631277726s" podCreationTimestamp="2026-01-21 09:56:43 +0000 UTC" firstStartedPulling="2026-01-21 09:56:45.268038252 +0000 UTC m=+120.936129930" lastFinishedPulling="2026-01-21 09:57:08.215229185 +0000 UTC m=+143.883320863" observedRunningTime="2026-01-21 09:57:10.627241029 +0000 UTC m=+146.295332707" watchObservedRunningTime="2026-01-21 
09:57:10.631277726 +0000 UTC m=+146.299369424" Jan 21 09:57:10 crc kubenswrapper[5119]: I0121 09:57:10.681555 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zm2f7" podStartSLOduration=4.8391910540000005 podStartE2EDuration="28.681538419s" podCreationTimestamp="2026-01-21 09:56:42 +0000 UTC" firstStartedPulling="2026-01-21 09:56:44.243218756 +0000 UTC m=+119.911310434" lastFinishedPulling="2026-01-21 09:57:08.085566121 +0000 UTC m=+143.753657799" observedRunningTime="2026-01-21 09:57:10.680360968 +0000 UTC m=+146.348452646" watchObservedRunningTime="2026-01-21 09:57:10.681538419 +0000 UTC m=+146.349630097" Jan 21 09:57:10 crc kubenswrapper[5119]: I0121 09:57:10.698894 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cgvgn" podStartSLOduration=6.859486368 podStartE2EDuration="30.698877633s" podCreationTimestamp="2026-01-21 09:56:40 +0000 UTC" firstStartedPulling="2026-01-21 09:56:44.246171155 +0000 UTC m=+119.914262833" lastFinishedPulling="2026-01-21 09:57:08.08556242 +0000 UTC m=+143.753654098" observedRunningTime="2026-01-21 09:57:10.698456811 +0000 UTC m=+146.366548499" watchObservedRunningTime="2026-01-21 09:57:10.698877633 +0000 UTC m=+146.366969301" Jan 21 09:57:10 crc kubenswrapper[5119]: I0121 09:57:10.718228 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9nmdb" podStartSLOduration=5.858095463 podStartE2EDuration="28.718213929s" podCreationTimestamp="2026-01-21 09:56:42 +0000 UTC" firstStartedPulling="2026-01-21 09:56:45.272324346 +0000 UTC m=+120.940416024" lastFinishedPulling="2026-01-21 09:57:08.132442812 +0000 UTC m=+143.800534490" observedRunningTime="2026-01-21 09:57:10.716599537 +0000 UTC m=+146.384691215" watchObservedRunningTime="2026-01-21 09:57:10.718213929 +0000 UTC m=+146.386305607" Jan 21 09:57:10 crc kubenswrapper[5119]: I0121 
09:57:10.739985 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w7cjs" podStartSLOduration=6.160307881 podStartE2EDuration="31.739967061s" podCreationTimestamp="2026-01-21 09:56:39 +0000 UTC" firstStartedPulling="2026-01-21 09:56:42.552145456 +0000 UTC m=+118.220237134" lastFinishedPulling="2026-01-21 09:57:08.131804636 +0000 UTC m=+143.799896314" observedRunningTime="2026-01-21 09:57:10.735994485 +0000 UTC m=+146.404086163" watchObservedRunningTime="2026-01-21 09:57:10.739967061 +0000 UTC m=+146.408058749" Jan 21 09:57:11 crc kubenswrapper[5119]: E0121 09:57:11.123029 5119 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="522c9f419263dda7903f58c657727688825beb62fbd951a112617dc2ee68863f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 21 09:57:11 crc kubenswrapper[5119]: E0121 09:57:11.126234 5119 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="522c9f419263dda7903f58c657727688825beb62fbd951a112617dc2ee68863f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 21 09:57:11 crc kubenswrapper[5119]: E0121 09:57:11.128215 5119 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="522c9f419263dda7903f58c657727688825beb62fbd951a112617dc2ee68863f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 21 09:57:11 crc kubenswrapper[5119]: E0121 09:57:11.128279 5119 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-bc4nv" podUID="b4c88372-61dd-4fb9-8bcf-7c51ec904dd8" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 21 09:57:11 crc kubenswrapper[5119]: I0121 09:57:11.982746 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 21 09:57:12 crc kubenswrapper[5119]: I0121 09:57:12.090583 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j4wln" Jan 21 09:57:12 crc kubenswrapper[5119]: I0121 09:57:12.090650 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-j4wln" Jan 21 09:57:12 crc kubenswrapper[5119]: I0121 09:57:12.097310 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a7b2f24-fc3a-4e6a-92a8-890523c80ce3-kube-api-access\") pod \"7a7b2f24-fc3a-4e6a-92a8-890523c80ce3\" (UID: \"7a7b2f24-fc3a-4e6a-92a8-890523c80ce3\") " Jan 21 09:57:12 crc kubenswrapper[5119]: I0121 09:57:12.097474 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a7b2f24-fc3a-4e6a-92a8-890523c80ce3-kubelet-dir\") pod \"7a7b2f24-fc3a-4e6a-92a8-890523c80ce3\" (UID: \"7a7b2f24-fc3a-4e6a-92a8-890523c80ce3\") " Jan 21 09:57:12 crc kubenswrapper[5119]: I0121 09:57:12.097875 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a7b2f24-fc3a-4e6a-92a8-890523c80ce3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7a7b2f24-fc3a-4e6a-92a8-890523c80ce3" (UID: "7a7b2f24-fc3a-4e6a-92a8-890523c80ce3"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:57:12 crc kubenswrapper[5119]: I0121 09:57:12.104964 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a7b2f24-fc3a-4e6a-92a8-890523c80ce3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7a7b2f24-fc3a-4e6a-92a8-890523c80ce3" (UID: "7a7b2f24-fc3a-4e6a-92a8-890523c80ce3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:57:12 crc kubenswrapper[5119]: I0121 09:57:12.134129 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j4wln" Jan 21 09:57:12 crc kubenswrapper[5119]: I0121 09:57:12.199728 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a7b2f24-fc3a-4e6a-92a8-890523c80ce3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 09:57:12 crc kubenswrapper[5119]: I0121 09:57:12.200240 5119 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a7b2f24-fc3a-4e6a-92a8-890523c80ce3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 09:57:12 crc kubenswrapper[5119]: I0121 09:57:12.455746 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zm2f7" Jan 21 09:57:12 crc kubenswrapper[5119]: I0121 09:57:12.455872 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-zm2f7" Jan 21 09:57:12 crc kubenswrapper[5119]: I0121 09:57:12.491380 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zm2f7" Jan 21 09:57:12 crc kubenswrapper[5119]: I0121 09:57:12.615386 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" 
event={"ID":"7a7b2f24-fc3a-4e6a-92a8-890523c80ce3","Type":"ContainerDied","Data":"341d515ba0261f55f23e27e0a3d2a55c8e711d0e936a14193dcb64f6ce4b6315"} Jan 21 09:57:12 crc kubenswrapper[5119]: I0121 09:57:12.615467 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="341d515ba0261f55f23e27e0a3d2a55c8e711d0e936a14193dcb64f6ce4b6315" Jan 21 09:57:12 crc kubenswrapper[5119]: I0121 09:57:12.615648 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 21 09:57:13 crc kubenswrapper[5119]: I0121 09:57:13.228454 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9nmdb" Jan 21 09:57:13 crc kubenswrapper[5119]: I0121 09:57:13.228501 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-9nmdb" Jan 21 09:57:13 crc kubenswrapper[5119]: I0121 09:57:13.629081 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6z5f6" Jan 21 09:57:13 crc kubenswrapper[5119]: I0121 09:57:13.629133 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-6z5f6" Jan 21 09:57:14 crc kubenswrapper[5119]: I0121 09:57:14.269084 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9nmdb" podUID="6783a1d3-549e-4077-9898-723d2984e451" containerName="registry-server" probeResult="failure" output=< Jan 21 09:57:14 crc kubenswrapper[5119]: timeout: failed to connect service ":50051" within 1s Jan 21 09:57:14 crc kubenswrapper[5119]: > Jan 21 09:57:14 crc kubenswrapper[5119]: I0121 09:57:14.404201 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-5b4c8" Jan 21 09:57:14 crc 
kubenswrapper[5119]: W0121 09:57:14.422043 5119 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeeb3fd25_b829_4977_aac0_aa2539bf13d0.slice/crio-conmon-37049f1013f9fec4c633ca41d61b28ee1f5c3a275eb495f20f500c1dafe8255e.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeeb3fd25_b829_4977_aac0_aa2539bf13d0.slice/crio-conmon-37049f1013f9fec4c633ca41d61b28ee1f5c3a275eb495f20f500c1dafe8255e.scope: no such file or directory Jan 21 09:57:14 crc kubenswrapper[5119]: W0121 09:57:14.422136 5119 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f4078de_237b_4252_be46_f0b89d21c8ed.slice/crio-conmon-c343b7cdea6e599a63dcb11499b382afbaf4fca4a57a34514e5361dd9b3ba5dd.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f4078de_237b_4252_be46_f0b89d21c8ed.slice/crio-conmon-c343b7cdea6e599a63dcb11499b382afbaf4fca4a57a34514e5361dd9b3ba5dd.scope: no such file or directory Jan 21 09:57:14 crc kubenswrapper[5119]: W0121 09:57:14.422161 5119 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeeb3fd25_b829_4977_aac0_aa2539bf13d0.slice/crio-37049f1013f9fec4c633ca41d61b28ee1f5c3a275eb495f20f500c1dafe8255e.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeeb3fd25_b829_4977_aac0_aa2539bf13d0.slice/crio-37049f1013f9fec4c633ca41d61b28ee1f5c3a275eb495f20f500c1dafe8255e.scope: no such file or directory Jan 21 09:57:14 crc kubenswrapper[5119]: W0121 09:57:14.422186 5119 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07776dee_a157_4c69_ae94_c63a101a84f2.slice/crio-conmon-60b8efe091becd04afa18c2bfb6b75ed99d5ccf66070f83f0e5824e86bc2a07a.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07776dee_a157_4c69_ae94_c63a101a84f2.slice/crio-conmon-60b8efe091becd04afa18c2bfb6b75ed99d5ccf66070f83f0e5824e86bc2a07a.scope: no such file or directory Jan 21 09:57:14 crc kubenswrapper[5119]: W0121 09:57:14.422206 5119 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f4078de_237b_4252_be46_f0b89d21c8ed.slice/crio-c343b7cdea6e599a63dcb11499b382afbaf4fca4a57a34514e5361dd9b3ba5dd.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f4078de_237b_4252_be46_f0b89d21c8ed.slice/crio-c343b7cdea6e599a63dcb11499b382afbaf4fca4a57a34514e5361dd9b3ba5dd.scope: no such file or directory Jan 21 09:57:14 crc kubenswrapper[5119]: W0121 09:57:14.422231 5119 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bccb111_fc78_420c_bb88_788974b0d7d5.slice/crio-conmon-85f037ab7a03b908500fa78f0dc848eaa29467d2f55de9b2282c96b68f41b908.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bccb111_fc78_420c_bb88_788974b0d7d5.slice/crio-conmon-85f037ab7a03b908500fa78f0dc848eaa29467d2f55de9b2282c96b68f41b908.scope: no such file or directory Jan 21 09:57:14 crc kubenswrapper[5119]: W0121 09:57:14.422252 5119 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49467157_6fc6_4f0b_b833_1b95a6068d7e.slice/crio-conmon-d11d9a2abf3ee6a1d57e2bc700370b8d9789db08d1b83edd743f862fd93867bb.scope": 
0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49467157_6fc6_4f0b_b833_1b95a6068d7e.slice/crio-conmon-d11d9a2abf3ee6a1d57e2bc700370b8d9789db08d1b83edd743f862fd93867bb.scope: no such file or directory Jan 21 09:57:14 crc kubenswrapper[5119]: W0121 09:57:14.422275 5119 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6783a1d3_549e_4077_9898_723d2984e451.slice/crio-conmon-f0bca505eb9d5f3462a3843d1bb088cde07d143b82a1065cc24822eeaba09d34.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6783a1d3_549e_4077_9898_723d2984e451.slice/crio-conmon-f0bca505eb9d5f3462a3843d1bb088cde07d143b82a1065cc24822eeaba09d34.scope: no such file or directory Jan 21 09:57:14 crc kubenswrapper[5119]: W0121 09:57:14.422296 5119 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07776dee_a157_4c69_ae94_c63a101a84f2.slice/crio-60b8efe091becd04afa18c2bfb6b75ed99d5ccf66070f83f0e5824e86bc2a07a.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07776dee_a157_4c69_ae94_c63a101a84f2.slice/crio-60b8efe091becd04afa18c2bfb6b75ed99d5ccf66070f83f0e5824e86bc2a07a.scope: no such file or directory Jan 21 09:57:14 crc kubenswrapper[5119]: W0121 09:57:14.422321 5119 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bccb111_fc78_420c_bb88_788974b0d7d5.slice/crio-85f037ab7a03b908500fa78f0dc848eaa29467d2f55de9b2282c96b68f41b908.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bccb111_fc78_420c_bb88_788974b0d7d5.slice/crio-85f037ab7a03b908500fa78f0dc848eaa29467d2f55de9b2282c96b68f41b908.scope: no such file or directory Jan 21 09:57:14 crc kubenswrapper[5119]: W0121 09:57:14.422341 5119 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49467157_6fc6_4f0b_b833_1b95a6068d7e.slice/crio-d11d9a2abf3ee6a1d57e2bc700370b8d9789db08d1b83edd743f862fd93867bb.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49467157_6fc6_4f0b_b833_1b95a6068d7e.slice/crio-d11d9a2abf3ee6a1d57e2bc700370b8d9789db08d1b83edd743f862fd93867bb.scope: no such file or directory Jan 21 09:57:14 crc kubenswrapper[5119]: W0121 09:57:14.422363 5119 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6783a1d3_549e_4077_9898_723d2984e451.slice/crio-f0bca505eb9d5f3462a3843d1bb088cde07d143b82a1065cc24822eeaba09d34.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6783a1d3_549e_4077_9898_723d2984e451.slice/crio-f0bca505eb9d5f3462a3843d1bb088cde07d143b82a1065cc24822eeaba09d34.scope: no such file or directory Jan 21 09:57:14 crc kubenswrapper[5119]: W0121 09:57:14.422386 5119 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda90aed0b_2281_4055_843d_678b06d52325.slice/crio-conmon-91fa5f62fb858a325b272d329fa9c20a5a9c7d7e31323aa89546359971d2a002.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda90aed0b_2281_4055_843d_678b06d52325.slice/crio-conmon-91fa5f62fb858a325b272d329fa9c20a5a9c7d7e31323aa89546359971d2a002.scope: no such file or 
directory
Jan 21 09:57:14 crc kubenswrapper[5119]: W0121 09:57:14.422409 5119 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4aa99c79_80c4_41a9_beaa_32c9643971e5.slice/crio-conmon-6a18e62518b6c198ac658e9919d776a7422bbfa9b94b2fed2d763b9de6904c2d.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4aa99c79_80c4_41a9_beaa_32c9643971e5.slice/crio-conmon-6a18e62518b6c198ac658e9919d776a7422bbfa9b94b2fed2d763b9de6904c2d.scope: no such file or directory
Jan 21 09:57:14 crc kubenswrapper[5119]: W0121 09:57:14.422427 5119 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4aa99c79_80c4_41a9_beaa_32c9643971e5.slice/crio-6a18e62518b6c198ac658e9919d776a7422bbfa9b94b2fed2d763b9de6904c2d.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4aa99c79_80c4_41a9_beaa_32c9643971e5.slice/crio-6a18e62518b6c198ac658e9919d776a7422bbfa9b94b2fed2d763b9de6904c2d.scope: no such file or directory
Jan 21 09:57:14 crc kubenswrapper[5119]: W0121 09:57:14.422451 5119 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda90aed0b_2281_4055_843d_678b06d52325.slice/crio-91fa5f62fb858a325b272d329fa9c20a5a9c7d7e31323aa89546359971d2a002.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda90aed0b_2281_4055_843d_678b06d52325.slice/crio-91fa5f62fb858a325b272d329fa9c20a5a9c7d7e31323aa89546359971d2a002.scope: no such file or directory
Jan 21 09:57:14 crc kubenswrapper[5119]: W0121 09:57:14.422474 5119 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-pod7a7b2f24_fc3a_4e6a_92a8_890523c80ce3.slice/crio-341d515ba0261f55f23e27e0a3d2a55c8e711d0e936a14193dcb64f6ce4b6315": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-pod7a7b2f24_fc3a_4e6a_92a8_890523c80ce3.slice/crio-341d515ba0261f55f23e27e0a3d2a55c8e711d0e936a14193dcb64f6ce4b6315: no such file or directory
Jan 21 09:57:14 crc kubenswrapper[5119]: W0121 09:57:14.427410 5119 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-pod7a7b2f24_fc3a_4e6a_92a8_890523c80ce3.slice/crio-conmon-f4a9d0e37fb9c95f824c825fbe7119b4dd576d46ae94e7fb3a26b8faa96be80f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-pod7a7b2f24_fc3a_4e6a_92a8_890523c80ce3.slice/crio-conmon-f4a9d0e37fb9c95f824c825fbe7119b4dd576d46ae94e7fb3a26b8faa96be80f.scope: no such file or directory
Jan 21 09:57:14 crc kubenswrapper[5119]: W0121 09:57:14.429388 5119 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-pod7a7b2f24_fc3a_4e6a_92a8_890523c80ce3.slice/crio-f4a9d0e37fb9c95f824c825fbe7119b4dd576d46ae94e7fb3a26b8faa96be80f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-pod7a7b2f24_fc3a_4e6a_92a8_890523c80ce3.slice/crio-f4a9d0e37fb9c95f824c825fbe7119b4dd576d46ae94e7fb3a26b8faa96be80f.scope: no such file or directory
Jan 21 09:57:14 crc kubenswrapper[5119]: E0121 09:57:14.575578 5119 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod7a7b2f24_fc3a_4e6a_92a8_890523c80ce3.slice\": RecentStats: unable to find data in memory cache]"
Jan 21 09:57:14 crc kubenswrapper[5119]: I0121 09:57:14.663392 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6z5f6" podUID="4aa99c79-80c4-41a9-beaa-32c9643971e5" containerName="registry-server" probeResult="failure" output=<
Jan 21 09:57:14 crc kubenswrapper[5119]: timeout: failed to connect service ":50051" within 1s
Jan 21 09:57:14 crc kubenswrapper[5119]: >
Jan 21 09:57:15 crc kubenswrapper[5119]: I0121 09:57:15.631933 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-bc4nv_b4c88372-61dd-4fb9-8bcf-7c51ec904dd8/kube-multus-additional-cni-plugins/0.log"
Jan 21 09:57:15 crc kubenswrapper[5119]: I0121 09:57:15.631973 5119 generic.go:358] "Generic (PLEG): container finished" podID="b4c88372-61dd-4fb9-8bcf-7c51ec904dd8" containerID="522c9f419263dda7903f58c657727688825beb62fbd951a112617dc2ee68863f" exitCode=137
Jan 21 09:57:15 crc kubenswrapper[5119]: I0121 09:57:15.632041 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-bc4nv" event={"ID":"b4c88372-61dd-4fb9-8bcf-7c51ec904dd8","Type":"ContainerDied","Data":"522c9f419263dda7903f58c657727688825beb62fbd951a112617dc2ee68863f"}
Jan 21 09:57:16 crc kubenswrapper[5119]: I0121 09:57:16.130061 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-bc4nv_b4c88372-61dd-4fb9-8bcf-7c51ec904dd8/kube-multus-additional-cni-plugins/0.log"
Jan 21 09:57:16 crc kubenswrapper[5119]: I0121 09:57:16.130167 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-bc4nv"
Jan 21 09:57:16 crc kubenswrapper[5119]: I0121 09:57:16.254056 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b4c88372-61dd-4fb9-8bcf-7c51ec904dd8-ready\") pod \"b4c88372-61dd-4fb9-8bcf-7c51ec904dd8\" (UID: \"b4c88372-61dd-4fb9-8bcf-7c51ec904dd8\") "
Jan 21 09:57:16 crc kubenswrapper[5119]: I0121 09:57:16.254173 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6j8zg\" (UniqueName: \"kubernetes.io/projected/b4c88372-61dd-4fb9-8bcf-7c51ec904dd8-kube-api-access-6j8zg\") pod \"b4c88372-61dd-4fb9-8bcf-7c51ec904dd8\" (UID: \"b4c88372-61dd-4fb9-8bcf-7c51ec904dd8\") "
Jan 21 09:57:16 crc kubenswrapper[5119]: I0121 09:57:16.254242 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b4c88372-61dd-4fb9-8bcf-7c51ec904dd8-cni-sysctl-allowlist\") pod \"b4c88372-61dd-4fb9-8bcf-7c51ec904dd8\" (UID: \"b4c88372-61dd-4fb9-8bcf-7c51ec904dd8\") "
Jan 21 09:57:16 crc kubenswrapper[5119]: I0121 09:57:16.254317 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b4c88372-61dd-4fb9-8bcf-7c51ec904dd8-tuning-conf-dir\") pod \"b4c88372-61dd-4fb9-8bcf-7c51ec904dd8\" (UID: \"b4c88372-61dd-4fb9-8bcf-7c51ec904dd8\") "
Jan 21 09:57:16 crc kubenswrapper[5119]: I0121 09:57:16.254789 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4c88372-61dd-4fb9-8bcf-7c51ec904dd8-ready" (OuterVolumeSpecName: "ready") pod "b4c88372-61dd-4fb9-8bcf-7c51ec904dd8" (UID: "b4c88372-61dd-4fb9-8bcf-7c51ec904dd8"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:57:16 crc kubenswrapper[5119]: I0121 09:57:16.254923 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4c88372-61dd-4fb9-8bcf-7c51ec904dd8-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "b4c88372-61dd-4fb9-8bcf-7c51ec904dd8" (UID: "b4c88372-61dd-4fb9-8bcf-7c51ec904dd8"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 09:57:16 crc kubenswrapper[5119]: I0121 09:57:16.255450 5119 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b4c88372-61dd-4fb9-8bcf-7c51ec904dd8-ready\") on node \"crc\" DevicePath \"\""
Jan 21 09:57:16 crc kubenswrapper[5119]: I0121 09:57:16.255490 5119 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b4c88372-61dd-4fb9-8bcf-7c51ec904dd8-tuning-conf-dir\") on node \"crc\" DevicePath \"\""
Jan 21 09:57:16 crc kubenswrapper[5119]: I0121 09:57:16.255829 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4c88372-61dd-4fb9-8bcf-7c51ec904dd8-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "b4c88372-61dd-4fb9-8bcf-7c51ec904dd8" (UID: "b4c88372-61dd-4fb9-8bcf-7c51ec904dd8"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:57:16 crc kubenswrapper[5119]: I0121 09:57:16.261753 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4c88372-61dd-4fb9-8bcf-7c51ec904dd8-kube-api-access-6j8zg" (OuterVolumeSpecName: "kube-api-access-6j8zg") pod "b4c88372-61dd-4fb9-8bcf-7c51ec904dd8" (UID: "b4c88372-61dd-4fb9-8bcf-7c51ec904dd8"). InnerVolumeSpecName "kube-api-access-6j8zg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:57:16 crc kubenswrapper[5119]: I0121 09:57:16.356864 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6j8zg\" (UniqueName: \"kubernetes.io/projected/b4c88372-61dd-4fb9-8bcf-7c51ec904dd8-kube-api-access-6j8zg\") on node \"crc\" DevicePath \"\""
Jan 21 09:57:16 crc kubenswrapper[5119]: I0121 09:57:16.357251 5119 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b4c88372-61dd-4fb9-8bcf-7c51ec904dd8-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Jan 21 09:57:16 crc kubenswrapper[5119]: I0121 09:57:16.641173 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-bc4nv_b4c88372-61dd-4fb9-8bcf-7c51ec904dd8/kube-multus-additional-cni-plugins/0.log"
Jan 21 09:57:16 crc kubenswrapper[5119]: I0121 09:57:16.641671 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-bc4nv" event={"ID":"b4c88372-61dd-4fb9-8bcf-7c51ec904dd8","Type":"ContainerDied","Data":"fdf5d15a2079b5dd2ec04c0e5e0a4851ead420df67e7fb15c10de454632501b1"}
Jan 21 09:57:16 crc kubenswrapper[5119]: I0121 09:57:16.641742 5119 scope.go:117] "RemoveContainer" containerID="522c9f419263dda7903f58c657727688825beb62fbd951a112617dc2ee68863f"
Jan 21 09:57:16 crc kubenswrapper[5119]: I0121 09:57:16.641800 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-bc4nv"
Jan 21 09:57:16 crc kubenswrapper[5119]: I0121 09:57:16.660343 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-bc4nv"]
Jan 21 09:57:16 crc kubenswrapper[5119]: I0121 09:57:16.661983 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-bc4nv"]
Jan 21 09:57:18 crc kubenswrapper[5119]: I0121 09:57:18.599965 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4c88372-61dd-4fb9-8bcf-7c51ec904dd8" path="/var/lib/kubelet/pods/b4c88372-61dd-4fb9-8bcf-7c51ec904dd8/volumes"
Jan 21 09:57:19 crc kubenswrapper[5119]: I0121 09:57:19.908289 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-w7cjs"
Jan 21 09:57:19 crc kubenswrapper[5119]: I0121 09:57:19.908330 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w7cjs"
Jan 21 09:57:19 crc kubenswrapper[5119]: I0121 09:57:19.969235 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w7cjs"
Jan 21 09:57:20 crc kubenswrapper[5119]: I0121 09:57:20.073168 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-9gxxh"
Jan 21 09:57:20 crc kubenswrapper[5119]: I0121 09:57:20.073393 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9gxxh"
Jan 21 09:57:20 crc kubenswrapper[5119]: I0121 09:57:20.114988 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9gxxh"
Jan 21 09:57:20 crc kubenswrapper[5119]: I0121 09:57:20.449316 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-cgvgn"
Jan 21 09:57:20 crc kubenswrapper[5119]: I0121 09:57:20.449580 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cgvgn"
Jan 21 09:57:20 crc kubenswrapper[5119]: I0121 09:57:20.505458 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cgvgn"
Jan 21 09:57:20 crc kubenswrapper[5119]: I0121 09:57:20.597636 5119 ???:1] "http: TLS handshake error from 192.168.126.11:36404: no serving certificate available for the kubelet"
Jan 21 09:57:20 crc kubenswrapper[5119]: I0121 09:57:20.699342 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cgvgn"
Jan 21 09:57:20 crc kubenswrapper[5119]: I0121 09:57:20.700928 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w7cjs"
Jan 21 09:57:20 crc kubenswrapper[5119]: I0121 09:57:20.712295 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9gxxh"
Jan 21 09:57:21 crc kubenswrapper[5119]: I0121 09:57:21.598819 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cgvgn"]
Jan 21 09:57:21 crc kubenswrapper[5119]: I0121 09:57:21.668750 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wn7cq"
Jan 21 09:57:22 crc kubenswrapper[5119]: I0121 09:57:22.680943 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cgvgn" podUID="eeb3fd25-b829-4977-aac0-aa2539bf13d0" containerName="registry-server" containerID="cri-o://303019e502b4e5dbcde8883528ba993026c2840cf2441b90cde2966207de8c9c" gracePeriod=2
Jan 21 09:57:22 crc kubenswrapper[5119]: I0121 09:57:22.683889 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j4wln"
Jan 21 09:57:23 crc kubenswrapper[5119]: I0121 09:57:22.999569 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wn7cq"]
Jan 21 09:57:23 crc kubenswrapper[5119]: I0121 09:57:23.000385 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wn7cq" podUID="3f4078de-237b-4252-be46-f0b89d21c8ed" containerName="registry-server" containerID="cri-o://61439cea4b2bc6c759e3fd32e34aa39bea786ec65207e31cd272c9facd9858fe" gracePeriod=2
Jan 21 09:57:23 crc kubenswrapper[5119]: I0121 09:57:23.304822 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9nmdb"
Jan 21 09:57:23 crc kubenswrapper[5119]: I0121 09:57:23.370572 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9nmdb"
Jan 21 09:57:23 crc kubenswrapper[5119]: I0121 09:57:23.667755 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6z5f6"
Jan 21 09:57:23 crc kubenswrapper[5119]: I0121 09:57:23.702670 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zm2f7"
Jan 21 09:57:23 crc kubenswrapper[5119]: I0121 09:57:23.720242 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6z5f6"
Jan 21 09:57:24 crc kubenswrapper[5119]: I0121 09:57:24.717308 5119 generic.go:358] "Generic (PLEG): container finished" podID="3f4078de-237b-4252-be46-f0b89d21c8ed" containerID="61439cea4b2bc6c759e3fd32e34aa39bea786ec65207e31cd272c9facd9858fe" exitCode=0
Jan 21 09:57:24 crc kubenswrapper[5119]: I0121 09:57:24.717387 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wn7cq" event={"ID":"3f4078de-237b-4252-be46-f0b89d21c8ed","Type":"ContainerDied","Data":"61439cea4b2bc6c759e3fd32e34aa39bea786ec65207e31cd272c9facd9858fe"}
Jan 21 09:57:24 crc kubenswrapper[5119]: I0121 09:57:24.719987 5119 generic.go:358] "Generic (PLEG): container finished" podID="eeb3fd25-b829-4977-aac0-aa2539bf13d0" containerID="303019e502b4e5dbcde8883528ba993026c2840cf2441b90cde2966207de8c9c" exitCode=0
Jan 21 09:57:24 crc kubenswrapper[5119]: I0121 09:57:24.720077 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgvgn" event={"ID":"eeb3fd25-b829-4977-aac0-aa2539bf13d0","Type":"ContainerDied","Data":"303019e502b4e5dbcde8883528ba993026c2840cf2441b90cde2966207de8c9c"}
Jan 21 09:57:24 crc kubenswrapper[5119]: I0121 09:57:24.830741 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Jan 21 09:57:24 crc kubenswrapper[5119]: I0121 09:57:24.831447 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b4c88372-61dd-4fb9-8bcf-7c51ec904dd8" containerName="kube-multus-additional-cni-plugins"
Jan 21 09:57:24 crc kubenswrapper[5119]: I0121 09:57:24.831473 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4c88372-61dd-4fb9-8bcf-7c51ec904dd8" containerName="kube-multus-additional-cni-plugins"
Jan 21 09:57:24 crc kubenswrapper[5119]: I0121 09:57:24.831494 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9a860d3c-104e-4e29-8722-e6b4c5389f33" containerName="pruner"
Jan 21 09:57:24 crc kubenswrapper[5119]: I0121 09:57:24.831502 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a860d3c-104e-4e29-8722-e6b4c5389f33" containerName="pruner"
Jan 21 09:57:24 crc kubenswrapper[5119]: I0121 09:57:24.831521 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7a7b2f24-fc3a-4e6a-92a8-890523c80ce3" containerName="pruner"
Jan 21 09:57:24 crc kubenswrapper[5119]: I0121 09:57:24.831529 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a7b2f24-fc3a-4e6a-92a8-890523c80ce3" containerName="pruner"
Jan 21 09:57:24 crc kubenswrapper[5119]: I0121 09:57:24.831669 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="b4c88372-61dd-4fb9-8bcf-7c51ec904dd8" containerName="kube-multus-additional-cni-plugins"
Jan 21 09:57:24 crc kubenswrapper[5119]: I0121 09:57:24.831686 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="7a7b2f24-fc3a-4e6a-92a8-890523c80ce3" containerName="pruner"
Jan 21 09:57:24 crc kubenswrapper[5119]: I0121 09:57:24.831696 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="9a860d3c-104e-4e29-8722-e6b4c5389f33" containerName="pruner"
Jan 21 09:57:24 crc kubenswrapper[5119]: I0121 09:57:24.884447 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Jan 21 09:57:24 crc kubenswrapper[5119]: I0121 09:57:24.884630 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 21 09:57:24 crc kubenswrapper[5119]: I0121 09:57:24.888888 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Jan 21 09:57:24 crc kubenswrapper[5119]: I0121 09:57:24.889123 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Jan 21 09:57:24 crc kubenswrapper[5119]: I0121 09:57:24.933019 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wn7cq"
Jan 21 09:57:24 crc kubenswrapper[5119]: I0121 09:57:24.976248 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f4078de-237b-4252-be46-f0b89d21c8ed-utilities\") pod \"3f4078de-237b-4252-be46-f0b89d21c8ed\" (UID: \"3f4078de-237b-4252-be46-f0b89d21c8ed\") "
Jan 21 09:57:24 crc kubenswrapper[5119]: I0121 09:57:24.976373 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f4078de-237b-4252-be46-f0b89d21c8ed-catalog-content\") pod \"3f4078de-237b-4252-be46-f0b89d21c8ed\" (UID: \"3f4078de-237b-4252-be46-f0b89d21c8ed\") "
Jan 21 09:57:24 crc kubenswrapper[5119]: I0121 09:57:24.976410 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf9p7\" (UniqueName: \"kubernetes.io/projected/3f4078de-237b-4252-be46-f0b89d21c8ed-kube-api-access-xf9p7\") pod \"3f4078de-237b-4252-be46-f0b89d21c8ed\" (UID: \"3f4078de-237b-4252-be46-f0b89d21c8ed\") "
Jan 21 09:57:24 crc kubenswrapper[5119]: I0121 09:57:24.976546 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5119035c-68df-4e77-a6d5-046a55975238-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"5119035c-68df-4e77-a6d5-046a55975238\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 21 09:57:24 crc kubenswrapper[5119]: I0121 09:57:24.976572 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5119035c-68df-4e77-a6d5-046a55975238-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"5119035c-68df-4e77-a6d5-046a55975238\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 21 09:57:24 crc kubenswrapper[5119]: I0121 09:57:24.977687 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f4078de-237b-4252-be46-f0b89d21c8ed-utilities" (OuterVolumeSpecName: "utilities") pod "3f4078de-237b-4252-be46-f0b89d21c8ed" (UID: "3f4078de-237b-4252-be46-f0b89d21c8ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:57:24 crc kubenswrapper[5119]: I0121 09:57:24.997116 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f4078de-237b-4252-be46-f0b89d21c8ed-kube-api-access-xf9p7" (OuterVolumeSpecName: "kube-api-access-xf9p7") pod "3f4078de-237b-4252-be46-f0b89d21c8ed" (UID: "3f4078de-237b-4252-be46-f0b89d21c8ed"). InnerVolumeSpecName "kube-api-access-xf9p7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.009003 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f4078de-237b-4252-be46-f0b89d21c8ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f4078de-237b-4252-be46-f0b89d21c8ed" (UID: "3f4078de-237b-4252-be46-f0b89d21c8ed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.071299 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cgvgn"
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.079073 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5119035c-68df-4e77-a6d5-046a55975238-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"5119035c-68df-4e77-a6d5-046a55975238\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.079158 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5119035c-68df-4e77-a6d5-046a55975238-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"5119035c-68df-4e77-a6d5-046a55975238\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.079359 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f4078de-237b-4252-be46-f0b89d21c8ed-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.079390 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f4078de-237b-4252-be46-f0b89d21c8ed-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.079406 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xf9p7\" (UniqueName: \"kubernetes.io/projected/3f4078de-237b-4252-be46-f0b89d21c8ed-kube-api-access-xf9p7\") on node \"crc\" DevicePath \"\""
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.079922 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5119035c-68df-4e77-a6d5-046a55975238-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"5119035c-68df-4e77-a6d5-046a55975238\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.101650 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5119035c-68df-4e77-a6d5-046a55975238-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"5119035c-68df-4e77-a6d5-046a55975238\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.180264 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pl4kz\" (UniqueName: \"kubernetes.io/projected/eeb3fd25-b829-4977-aac0-aa2539bf13d0-kube-api-access-pl4kz\") pod \"eeb3fd25-b829-4977-aac0-aa2539bf13d0\" (UID: \"eeb3fd25-b829-4977-aac0-aa2539bf13d0\") "
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.180364 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeb3fd25-b829-4977-aac0-aa2539bf13d0-catalog-content\") pod \"eeb3fd25-b829-4977-aac0-aa2539bf13d0\" (UID: \"eeb3fd25-b829-4977-aac0-aa2539bf13d0\") "
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.180459 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeb3fd25-b829-4977-aac0-aa2539bf13d0-utilities\") pod \"eeb3fd25-b829-4977-aac0-aa2539bf13d0\" (UID: \"eeb3fd25-b829-4977-aac0-aa2539bf13d0\") "
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.181454 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eeb3fd25-b829-4977-aac0-aa2539bf13d0-utilities" (OuterVolumeSpecName: "utilities") pod "eeb3fd25-b829-4977-aac0-aa2539bf13d0" (UID: "eeb3fd25-b829-4977-aac0-aa2539bf13d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.185656 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeb3fd25-b829-4977-aac0-aa2539bf13d0-kube-api-access-pl4kz" (OuterVolumeSpecName: "kube-api-access-pl4kz") pod "eeb3fd25-b829-4977-aac0-aa2539bf13d0" (UID: "eeb3fd25-b829-4977-aac0-aa2539bf13d0"). InnerVolumeSpecName "kube-api-access-pl4kz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.239355 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eeb3fd25-b829-4977-aac0-aa2539bf13d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eeb3fd25-b829-4977-aac0-aa2539bf13d0" (UID: "eeb3fd25-b829-4977-aac0-aa2539bf13d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.240466 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.282151 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeb3fd25-b829-4977-aac0-aa2539bf13d0-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.282182 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pl4kz\" (UniqueName: \"kubernetes.io/projected/eeb3fd25-b829-4977-aac0-aa2539bf13d0-kube-api-access-pl4kz\") on node \"crc\" DevicePath \"\""
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.282192 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeb3fd25-b829-4977-aac0-aa2539bf13d0-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.397067 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zm2f7"]
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.397678 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zm2f7" podUID="a90aed0b-2281-4055-843d-678b06d52325" containerName="registry-server" containerID="cri-o://5bda40e03e451706bfc0cb16ad5ee7f7b48ac528af04387a2a0b6e561a70281b" gracePeriod=2
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.649072 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Jan 21 09:57:25 crc kubenswrapper[5119]: W0121 09:57:25.653145 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5119035c_68df_4e77_a6d5_046a55975238.slice/crio-55d354ea28a4f6954999c6a0a7cd77fbf2e0d877610158c39703243fbdef6763 WatchSource:0}: Error finding container 55d354ea28a4f6954999c6a0a7cd77fbf2e0d877610158c39703243fbdef6763: Status 404 returned error can't find the container with id 55d354ea28a4f6954999c6a0a7cd77fbf2e0d877610158c39703243fbdef6763
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.727752 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgvgn" event={"ID":"eeb3fd25-b829-4977-aac0-aa2539bf13d0","Type":"ContainerDied","Data":"dd8c4b8ccfc14e8cd76f499072f775757b491d23cfb2c9179e4a7c5f9ea07658"}
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.727820 5119 scope.go:117] "RemoveContainer" containerID="303019e502b4e5dbcde8883528ba993026c2840cf2441b90cde2966207de8c9c"
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.727820 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cgvgn"
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.728806 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"5119035c-68df-4e77-a6d5-046a55975238","Type":"ContainerStarted","Data":"55d354ea28a4f6954999c6a0a7cd77fbf2e0d877610158c39703243fbdef6763"}
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.731934 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wn7cq" event={"ID":"3f4078de-237b-4252-be46-f0b89d21c8ed","Type":"ContainerDied","Data":"2dbe03a795a16ceeac08bbd07014ea303104947c6fe8d94027a07ca2f3317166"}
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.731979 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wn7cq"
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.743930 5119 scope.go:117] "RemoveContainer" containerID="37049f1013f9fec4c633ca41d61b28ee1f5c3a275eb495f20f500c1dafe8255e"
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.760272 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wn7cq"]
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.761456 5119 scope.go:117] "RemoveContainer" containerID="be3022aafe9ef17c86556fb4e19eff3f45b3a9f1b6808e9dc99e75a28cd36a03"
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.762566 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wn7cq"]
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.799879 5119 scope.go:117] "RemoveContainer" containerID="61439cea4b2bc6c759e3fd32e34aa39bea786ec65207e31cd272c9facd9858fe"
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.806176 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cgvgn"]
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.806221 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cgvgn"]
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.897294 5119 scope.go:117] "RemoveContainer" containerID="c343b7cdea6e599a63dcb11499b382afbaf4fca4a57a34514e5361dd9b3ba5dd"
Jan 21 09:57:25 crc kubenswrapper[5119]: I0121 09:57:25.920060 5119 scope.go:117] "RemoveContainer" containerID="2dda0b36a0e1728d9f7742117b19a325f3e8250cf03c7ef86dcfcd1d6d17eb3f"
Jan 21 09:57:26 crc kubenswrapper[5119]: I0121 09:57:26.293175 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zm2f7"
Jan 21 09:57:26 crc kubenswrapper[5119]: I0121 09:57:26.397971 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a90aed0b-2281-4055-843d-678b06d52325-utilities\") pod \"a90aed0b-2281-4055-843d-678b06d52325\" (UID: \"a90aed0b-2281-4055-843d-678b06d52325\") "
Jan 21 09:57:26 crc kubenswrapper[5119]: I0121 09:57:26.398067 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdn8m\" (UniqueName: \"kubernetes.io/projected/a90aed0b-2281-4055-843d-678b06d52325-kube-api-access-cdn8m\") pod \"a90aed0b-2281-4055-843d-678b06d52325\" (UID: \"a90aed0b-2281-4055-843d-678b06d52325\") "
Jan 21 09:57:26 crc kubenswrapper[5119]: I0121 09:57:26.398146 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a90aed0b-2281-4055-843d-678b06d52325-catalog-content\") pod \"a90aed0b-2281-4055-843d-678b06d52325\" (UID: \"a90aed0b-2281-4055-843d-678b06d52325\") "
Jan 21 09:57:26 crc kubenswrapper[5119]: I0121 09:57:26.399554 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a90aed0b-2281-4055-843d-678b06d52325-utilities" (OuterVolumeSpecName: "utilities") pod "a90aed0b-2281-4055-843d-678b06d52325" (UID: "a90aed0b-2281-4055-843d-678b06d52325"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:57:26 crc kubenswrapper[5119]: I0121 09:57:26.404466 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a90aed0b-2281-4055-843d-678b06d52325-kube-api-access-cdn8m" (OuterVolumeSpecName: "kube-api-access-cdn8m") pod "a90aed0b-2281-4055-843d-678b06d52325" (UID: "a90aed0b-2281-4055-843d-678b06d52325"). InnerVolumeSpecName "kube-api-access-cdn8m". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:57:26 crc kubenswrapper[5119]: I0121 09:57:26.410162 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a90aed0b-2281-4055-843d-678b06d52325-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a90aed0b-2281-4055-843d-678b06d52325" (UID: "a90aed0b-2281-4055-843d-678b06d52325"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:57:26 crc kubenswrapper[5119]: I0121 09:57:26.500037 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a90aed0b-2281-4055-843d-678b06d52325-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 09:57:26 crc kubenswrapper[5119]: I0121 09:57:26.500098 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cdn8m\" (UniqueName: \"kubernetes.io/projected/a90aed0b-2281-4055-843d-678b06d52325-kube-api-access-cdn8m\") on node \"crc\" DevicePath \"\""
Jan 21 09:57:26 crc kubenswrapper[5119]: I0121 09:57:26.500115 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a90aed0b-2281-4055-843d-678b06d52325-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 09:57:26 crc kubenswrapper[5119]: I0121 09:57:26.603973 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f4078de-237b-4252-be46-f0b89d21c8ed" path="/var/lib/kubelet/pods/3f4078de-237b-4252-be46-f0b89d21c8ed/volumes"
Jan 21 09:57:26 crc kubenswrapper[5119]: I0121 09:57:26.604683 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeb3fd25-b829-4977-aac0-aa2539bf13d0" path="/var/lib/kubelet/pods/eeb3fd25-b829-4977-aac0-aa2539bf13d0/volumes"
Jan 21 09:57:26 crc kubenswrapper[5119]: I0121 09:57:26.737076 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"5119035c-68df-4e77-a6d5-046a55975238","Type":"ContainerStarted","Data":"0ae8c9d440708b63c540a95925cbb29bc78ca44b07242526505aa333b77ff9b9"}
Jan 21 09:57:26 crc kubenswrapper[5119]: I0121 09:57:26.741918 5119 generic.go:358] "Generic (PLEG): container finished" podID="a90aed0b-2281-4055-843d-678b06d52325" containerID="5bda40e03e451706bfc0cb16ad5ee7f7b48ac528af04387a2a0b6e561a70281b" exitCode=0
Jan 21 09:57:26 crc kubenswrapper[5119]: I0121 09:57:26.741986 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zm2f7" event={"ID":"a90aed0b-2281-4055-843d-678b06d52325","Type":"ContainerDied","Data":"5bda40e03e451706bfc0cb16ad5ee7f7b48ac528af04387a2a0b6e561a70281b"}
Jan 21 09:57:26 crc kubenswrapper[5119]: I0121 09:57:26.741992 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zm2f7"
Jan 21 09:57:26 crc kubenswrapper[5119]: I0121 09:57:26.742008 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zm2f7" event={"ID":"a90aed0b-2281-4055-843d-678b06d52325","Type":"ContainerDied","Data":"d8c7c502c61fd9683e0fe4b7062fc7ab35cc5b8d304334e5c2286beaa7e91917"}
Jan 21 09:57:26 crc kubenswrapper[5119]: I0121 09:57:26.742030 5119 scope.go:117] "RemoveContainer" containerID="5bda40e03e451706bfc0cb16ad5ee7f7b48ac528af04387a2a0b6e561a70281b"
Jan 21 09:57:26 crc kubenswrapper[5119]: I0121 09:57:26.758074 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=2.758053155 podStartE2EDuration="2.758053155s" podCreationTimestamp="2026-01-21 09:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:57:26.753462803 +0000 UTC m=+162.421554501" watchObservedRunningTime="2026-01-21 09:57:26.758053155 +0000 UTC m=+162.426144843"
Jan 21 09:57:26 crc kubenswrapper[5119]: I0121 09:57:26.763152 5119 scope.go:117] "RemoveContainer" containerID="91fa5f62fb858a325b272d329fa9c20a5a9c7d7e31323aa89546359971d2a002"
Jan 21 09:57:26 crc kubenswrapper[5119]: I0121 09:57:26.771159 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zm2f7"]
Jan 21 09:57:26 crc kubenswrapper[5119]: I0121 09:57:26.778691 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zm2f7"]
Jan 21 09:57:26 crc kubenswrapper[5119]: I0121 09:57:26.780258 5119 scope.go:117] "RemoveContainer" containerID="b71975588fd89225ab1f014a5f733f05eeb18ff718934d72fcfe7806c9811738"
Jan 21 09:57:26 crc kubenswrapper[5119]: I0121 09:57:26.801142 5119 scope.go:117] "RemoveContainer" containerID="5bda40e03e451706bfc0cb16ad5ee7f7b48ac528af04387a2a0b6e561a70281b"
Jan 21 09:57:26 crc kubenswrapper[5119]: E0121 09:57:26.802838 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bda40e03e451706bfc0cb16ad5ee7f7b48ac528af04387a2a0b6e561a70281b\": container with ID starting with 5bda40e03e451706bfc0cb16ad5ee7f7b48ac528af04387a2a0b6e561a70281b not found: ID does not exist" containerID="5bda40e03e451706bfc0cb16ad5ee7f7b48ac528af04387a2a0b6e561a70281b"
Jan 21 09:57:26 crc kubenswrapper[5119]: I0121 09:57:26.802877 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bda40e03e451706bfc0cb16ad5ee7f7b48ac528af04387a2a0b6e561a70281b"} err="failed to get container status \"5bda40e03e451706bfc0cb16ad5ee7f7b48ac528af04387a2a0b6e561a70281b\": rpc error: code = NotFound desc = could not find container \"5bda40e03e451706bfc0cb16ad5ee7f7b48ac528af04387a2a0b6e561a70281b\": container with ID starting with 5bda40e03e451706bfc0cb16ad5ee7f7b48ac528af04387a2a0b6e561a70281b not found: ID does not exist"
Jan 21 09:57:26 crc kubenswrapper[5119]:
I0121 09:57:26.802920 5119 scope.go:117] "RemoveContainer" containerID="91fa5f62fb858a325b272d329fa9c20a5a9c7d7e31323aa89546359971d2a002" Jan 21 09:57:26 crc kubenswrapper[5119]: E0121 09:57:26.804826 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91fa5f62fb858a325b272d329fa9c20a5a9c7d7e31323aa89546359971d2a002\": container with ID starting with 91fa5f62fb858a325b272d329fa9c20a5a9c7d7e31323aa89546359971d2a002 not found: ID does not exist" containerID="91fa5f62fb858a325b272d329fa9c20a5a9c7d7e31323aa89546359971d2a002" Jan 21 09:57:26 crc kubenswrapper[5119]: I0121 09:57:26.804858 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91fa5f62fb858a325b272d329fa9c20a5a9c7d7e31323aa89546359971d2a002"} err="failed to get container status \"91fa5f62fb858a325b272d329fa9c20a5a9c7d7e31323aa89546359971d2a002\": rpc error: code = NotFound desc = could not find container \"91fa5f62fb858a325b272d329fa9c20a5a9c7d7e31323aa89546359971d2a002\": container with ID starting with 91fa5f62fb858a325b272d329fa9c20a5a9c7d7e31323aa89546359971d2a002 not found: ID does not exist" Jan 21 09:57:26 crc kubenswrapper[5119]: I0121 09:57:26.804880 5119 scope.go:117] "RemoveContainer" containerID="b71975588fd89225ab1f014a5f733f05eeb18ff718934d72fcfe7806c9811738" Jan 21 09:57:26 crc kubenswrapper[5119]: E0121 09:57:26.805157 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b71975588fd89225ab1f014a5f733f05eeb18ff718934d72fcfe7806c9811738\": container with ID starting with b71975588fd89225ab1f014a5f733f05eeb18ff718934d72fcfe7806c9811738 not found: ID does not exist" containerID="b71975588fd89225ab1f014a5f733f05eeb18ff718934d72fcfe7806c9811738" Jan 21 09:57:26 crc kubenswrapper[5119]: I0121 09:57:26.805179 5119 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b71975588fd89225ab1f014a5f733f05eeb18ff718934d72fcfe7806c9811738"} err="failed to get container status \"b71975588fd89225ab1f014a5f733f05eeb18ff718934d72fcfe7806c9811738\": rpc error: code = NotFound desc = could not find container \"b71975588fd89225ab1f014a5f733f05eeb18ff718934d72fcfe7806c9811738\": container with ID starting with b71975588fd89225ab1f014a5f733f05eeb18ff718934d72fcfe7806c9811738 not found: ID does not exist" Jan 21 09:57:27 crc kubenswrapper[5119]: I0121 09:57:27.754890 5119 generic.go:358] "Generic (PLEG): container finished" podID="5119035c-68df-4e77-a6d5-046a55975238" containerID="0ae8c9d440708b63c540a95925cbb29bc78ca44b07242526505aa333b77ff9b9" exitCode=0 Jan 21 09:57:27 crc kubenswrapper[5119]: I0121 09:57:27.754990 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"5119035c-68df-4e77-a6d5-046a55975238","Type":"ContainerDied","Data":"0ae8c9d440708b63c540a95925cbb29bc78ca44b07242526505aa333b77ff9b9"} Jan 21 09:57:27 crc kubenswrapper[5119]: I0121 09:57:27.805004 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6z5f6"] Jan 21 09:57:27 crc kubenswrapper[5119]: I0121 09:57:27.805675 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6z5f6" podUID="4aa99c79-80c4-41a9-beaa-32c9643971e5" containerName="registry-server" containerID="cri-o://11f5c68abfa93991b35bf5d9b963d44c3e189103a316dc06052623dbb3b89210" gracePeriod=2 Jan 21 09:57:28 crc kubenswrapper[5119]: I0121 09:57:28.235316 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6z5f6" Jan 21 09:57:28 crc kubenswrapper[5119]: I0121 09:57:28.326308 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aa99c79-80c4-41a9-beaa-32c9643971e5-catalog-content\") pod \"4aa99c79-80c4-41a9-beaa-32c9643971e5\" (UID: \"4aa99c79-80c4-41a9-beaa-32c9643971e5\") " Jan 21 09:57:28 crc kubenswrapper[5119]: I0121 09:57:28.326655 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4hgg\" (UniqueName: \"kubernetes.io/projected/4aa99c79-80c4-41a9-beaa-32c9643971e5-kube-api-access-z4hgg\") pod \"4aa99c79-80c4-41a9-beaa-32c9643971e5\" (UID: \"4aa99c79-80c4-41a9-beaa-32c9643971e5\") " Jan 21 09:57:28 crc kubenswrapper[5119]: I0121 09:57:28.326733 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aa99c79-80c4-41a9-beaa-32c9643971e5-utilities\") pod \"4aa99c79-80c4-41a9-beaa-32c9643971e5\" (UID: \"4aa99c79-80c4-41a9-beaa-32c9643971e5\") " Jan 21 09:57:28 crc kubenswrapper[5119]: I0121 09:57:28.327677 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4aa99c79-80c4-41a9-beaa-32c9643971e5-utilities" (OuterVolumeSpecName: "utilities") pod "4aa99c79-80c4-41a9-beaa-32c9643971e5" (UID: "4aa99c79-80c4-41a9-beaa-32c9643971e5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:57:28 crc kubenswrapper[5119]: I0121 09:57:28.337862 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aa99c79-80c4-41a9-beaa-32c9643971e5-kube-api-access-z4hgg" (OuterVolumeSpecName: "kube-api-access-z4hgg") pod "4aa99c79-80c4-41a9-beaa-32c9643971e5" (UID: "4aa99c79-80c4-41a9-beaa-32c9643971e5"). InnerVolumeSpecName "kube-api-access-z4hgg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:57:28 crc kubenswrapper[5119]: I0121 09:57:28.428843 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z4hgg\" (UniqueName: \"kubernetes.io/projected/4aa99c79-80c4-41a9-beaa-32c9643971e5-kube-api-access-z4hgg\") on node \"crc\" DevicePath \"\"" Jan 21 09:57:28 crc kubenswrapper[5119]: I0121 09:57:28.428878 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aa99c79-80c4-41a9-beaa-32c9643971e5-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 09:57:28 crc kubenswrapper[5119]: I0121 09:57:28.438054 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4aa99c79-80c4-41a9-beaa-32c9643971e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4aa99c79-80c4-41a9-beaa-32c9643971e5" (UID: "4aa99c79-80c4-41a9-beaa-32c9643971e5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:57:28 crc kubenswrapper[5119]: I0121 09:57:28.530148 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aa99c79-80c4-41a9-beaa-32c9643971e5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 09:57:28 crc kubenswrapper[5119]: I0121 09:57:28.613243 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a90aed0b-2281-4055-843d-678b06d52325" path="/var/lib/kubelet/pods/a90aed0b-2281-4055-843d-678b06d52325/volumes" Jan 21 09:57:28 crc kubenswrapper[5119]: I0121 09:57:28.769224 5119 generic.go:358] "Generic (PLEG): container finished" podID="4aa99c79-80c4-41a9-beaa-32c9643971e5" containerID="11f5c68abfa93991b35bf5d9b963d44c3e189103a316dc06052623dbb3b89210" exitCode=0 Jan 21 09:57:28 crc kubenswrapper[5119]: I0121 09:57:28.769275 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6z5f6" 
event={"ID":"4aa99c79-80c4-41a9-beaa-32c9643971e5","Type":"ContainerDied","Data":"11f5c68abfa93991b35bf5d9b963d44c3e189103a316dc06052623dbb3b89210"} Jan 21 09:57:28 crc kubenswrapper[5119]: I0121 09:57:28.769327 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6z5f6" event={"ID":"4aa99c79-80c4-41a9-beaa-32c9643971e5","Type":"ContainerDied","Data":"eb3e9603d31bbfd272c369dcd5bed4cb294630784892ee895101a4150a09890b"} Jan 21 09:57:28 crc kubenswrapper[5119]: I0121 09:57:28.769347 5119 scope.go:117] "RemoveContainer" containerID="11f5c68abfa93991b35bf5d9b963d44c3e189103a316dc06052623dbb3b89210" Jan 21 09:57:28 crc kubenswrapper[5119]: I0121 09:57:28.771757 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z5f6" Jan 21 09:57:28 crc kubenswrapper[5119]: I0121 09:57:28.799202 5119 scope.go:117] "RemoveContainer" containerID="6a18e62518b6c198ac658e9919d776a7422bbfa9b94b2fed2d763b9de6904c2d" Jan 21 09:57:29 crc kubenswrapper[5119]: I0121 09:57:29.095690 5119 scope.go:117] "RemoveContainer" containerID="1e64d330d2b30a41021ba0bdf92675ab5e83830d9985b9409573304ac6eef7d6" Jan 21 09:57:29 crc kubenswrapper[5119]: I0121 09:57:29.123005 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6z5f6"] Jan 21 09:57:29 crc kubenswrapper[5119]: I0121 09:57:29.127345 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6z5f6"] Jan 21 09:57:29 crc kubenswrapper[5119]: I0121 09:57:29.150362 5119 scope.go:117] "RemoveContainer" containerID="11f5c68abfa93991b35bf5d9b963d44c3e189103a316dc06052623dbb3b89210" Jan 21 09:57:29 crc kubenswrapper[5119]: E0121 09:57:29.153706 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11f5c68abfa93991b35bf5d9b963d44c3e189103a316dc06052623dbb3b89210\": container with ID 
starting with 11f5c68abfa93991b35bf5d9b963d44c3e189103a316dc06052623dbb3b89210 not found: ID does not exist" containerID="11f5c68abfa93991b35bf5d9b963d44c3e189103a316dc06052623dbb3b89210" Jan 21 09:57:29 crc kubenswrapper[5119]: I0121 09:57:29.153751 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11f5c68abfa93991b35bf5d9b963d44c3e189103a316dc06052623dbb3b89210"} err="failed to get container status \"11f5c68abfa93991b35bf5d9b963d44c3e189103a316dc06052623dbb3b89210\": rpc error: code = NotFound desc = could not find container \"11f5c68abfa93991b35bf5d9b963d44c3e189103a316dc06052623dbb3b89210\": container with ID starting with 11f5c68abfa93991b35bf5d9b963d44c3e189103a316dc06052623dbb3b89210 not found: ID does not exist" Jan 21 09:57:29 crc kubenswrapper[5119]: I0121 09:57:29.153781 5119 scope.go:117] "RemoveContainer" containerID="6a18e62518b6c198ac658e9919d776a7422bbfa9b94b2fed2d763b9de6904c2d" Jan 21 09:57:29 crc kubenswrapper[5119]: E0121 09:57:29.154130 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a18e62518b6c198ac658e9919d776a7422bbfa9b94b2fed2d763b9de6904c2d\": container with ID starting with 6a18e62518b6c198ac658e9919d776a7422bbfa9b94b2fed2d763b9de6904c2d not found: ID does not exist" containerID="6a18e62518b6c198ac658e9919d776a7422bbfa9b94b2fed2d763b9de6904c2d" Jan 21 09:57:29 crc kubenswrapper[5119]: I0121 09:57:29.154158 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a18e62518b6c198ac658e9919d776a7422bbfa9b94b2fed2d763b9de6904c2d"} err="failed to get container status \"6a18e62518b6c198ac658e9919d776a7422bbfa9b94b2fed2d763b9de6904c2d\": rpc error: code = NotFound desc = could not find container \"6a18e62518b6c198ac658e9919d776a7422bbfa9b94b2fed2d763b9de6904c2d\": container with ID starting with 6a18e62518b6c198ac658e9919d776a7422bbfa9b94b2fed2d763b9de6904c2d not found: 
ID does not exist" Jan 21 09:57:29 crc kubenswrapper[5119]: I0121 09:57:29.154170 5119 scope.go:117] "RemoveContainer" containerID="1e64d330d2b30a41021ba0bdf92675ab5e83830d9985b9409573304ac6eef7d6" Jan 21 09:57:29 crc kubenswrapper[5119]: E0121 09:57:29.154334 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e64d330d2b30a41021ba0bdf92675ab5e83830d9985b9409573304ac6eef7d6\": container with ID starting with 1e64d330d2b30a41021ba0bdf92675ab5e83830d9985b9409573304ac6eef7d6 not found: ID does not exist" containerID="1e64d330d2b30a41021ba0bdf92675ab5e83830d9985b9409573304ac6eef7d6" Jan 21 09:57:29 crc kubenswrapper[5119]: I0121 09:57:29.154348 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e64d330d2b30a41021ba0bdf92675ab5e83830d9985b9409573304ac6eef7d6"} err="failed to get container status \"1e64d330d2b30a41021ba0bdf92675ab5e83830d9985b9409573304ac6eef7d6\": rpc error: code = NotFound desc = could not find container \"1e64d330d2b30a41021ba0bdf92675ab5e83830d9985b9409573304ac6eef7d6\": container with ID starting with 1e64d330d2b30a41021ba0bdf92675ab5e83830d9985b9409573304ac6eef7d6 not found: ID does not exist" Jan 21 09:57:29 crc kubenswrapper[5119]: I0121 09:57:29.266633 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 21 09:57:29 crc kubenswrapper[5119]: I0121 09:57:29.343236 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5119035c-68df-4e77-a6d5-046a55975238-kubelet-dir\") pod \"5119035c-68df-4e77-a6d5-046a55975238\" (UID: \"5119035c-68df-4e77-a6d5-046a55975238\") " Jan 21 09:57:29 crc kubenswrapper[5119]: I0121 09:57:29.343324 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5119035c-68df-4e77-a6d5-046a55975238-kube-api-access\") pod \"5119035c-68df-4e77-a6d5-046a55975238\" (UID: \"5119035c-68df-4e77-a6d5-046a55975238\") " Jan 21 09:57:29 crc kubenswrapper[5119]: I0121 09:57:29.343386 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5119035c-68df-4e77-a6d5-046a55975238-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5119035c-68df-4e77-a6d5-046a55975238" (UID: "5119035c-68df-4e77-a6d5-046a55975238"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:57:29 crc kubenswrapper[5119]: I0121 09:57:29.343586 5119 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5119035c-68df-4e77-a6d5-046a55975238-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 09:57:29 crc kubenswrapper[5119]: I0121 09:57:29.350713 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5119035c-68df-4e77-a6d5-046a55975238-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5119035c-68df-4e77-a6d5-046a55975238" (UID: "5119035c-68df-4e77-a6d5-046a55975238"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:57:29 crc kubenswrapper[5119]: I0121 09:57:29.444671 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5119035c-68df-4e77-a6d5-046a55975238-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 09:57:29 crc kubenswrapper[5119]: I0121 09:57:29.777030 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"5119035c-68df-4e77-a6d5-046a55975238","Type":"ContainerDied","Data":"55d354ea28a4f6954999c6a0a7cd77fbf2e0d877610158c39703243fbdef6763"} Jan 21 09:57:29 crc kubenswrapper[5119]: I0121 09:57:29.777889 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55d354ea28a4f6954999c6a0a7cd77fbf2e0d877610158c39703243fbdef6763" Jan 21 09:57:29 crc kubenswrapper[5119]: I0121 09:57:29.777104 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 21 09:57:30 crc kubenswrapper[5119]: I0121 09:57:30.596811 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4aa99c79-80c4-41a9-beaa-32c9643971e5" path="/var/lib/kubelet/pods/4aa99c79-80c4-41a9-beaa-32c9643971e5/volumes" Jan 21 09:57:30 crc kubenswrapper[5119]: I0121 09:57:30.597804 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.023220 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.023868 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3f4078de-237b-4252-be46-f0b89d21c8ed" containerName="extract-content" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.023885 5119 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="3f4078de-237b-4252-be46-f0b89d21c8ed" containerName="extract-content" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.023900 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3f4078de-237b-4252-be46-f0b89d21c8ed" containerName="extract-utilities" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.023907 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f4078de-237b-4252-be46-f0b89d21c8ed" containerName="extract-utilities" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.023919 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4aa99c79-80c4-41a9-beaa-32c9643971e5" containerName="extract-utilities" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.023927 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aa99c79-80c4-41a9-beaa-32c9643971e5" containerName="extract-utilities" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.023936 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4aa99c79-80c4-41a9-beaa-32c9643971e5" containerName="extract-content" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.023946 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aa99c79-80c4-41a9-beaa-32c9643971e5" containerName="extract-content" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.023958 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="eeb3fd25-b829-4977-aac0-aa2539bf13d0" containerName="extract-utilities" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.023965 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb3fd25-b829-4977-aac0-aa2539bf13d0" containerName="extract-utilities" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.023973 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a90aed0b-2281-4055-843d-678b06d52325" containerName="extract-utilities" Jan 21 09:57:31 crc 
kubenswrapper[5119]: I0121 09:57:31.023979 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="a90aed0b-2281-4055-843d-678b06d52325" containerName="extract-utilities" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.023988 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3f4078de-237b-4252-be46-f0b89d21c8ed" containerName="registry-server" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.023995 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f4078de-237b-4252-be46-f0b89d21c8ed" containerName="registry-server" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.024008 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5119035c-68df-4e77-a6d5-046a55975238" containerName="pruner" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.024015 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="5119035c-68df-4e77-a6d5-046a55975238" containerName="pruner" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.024024 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="eeb3fd25-b829-4977-aac0-aa2539bf13d0" containerName="registry-server" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.024031 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb3fd25-b829-4977-aac0-aa2539bf13d0" containerName="registry-server" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.024041 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a90aed0b-2281-4055-843d-678b06d52325" containerName="extract-content" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.024048 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="a90aed0b-2281-4055-843d-678b06d52325" containerName="extract-content" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.024075 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4aa99c79-80c4-41a9-beaa-32c9643971e5" 
containerName="registry-server" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.024083 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aa99c79-80c4-41a9-beaa-32c9643971e5" containerName="registry-server" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.024099 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="eeb3fd25-b829-4977-aac0-aa2539bf13d0" containerName="extract-content" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.024107 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb3fd25-b829-4977-aac0-aa2539bf13d0" containerName="extract-content" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.024116 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a90aed0b-2281-4055-843d-678b06d52325" containerName="registry-server" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.024123 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="a90aed0b-2281-4055-843d-678b06d52325" containerName="registry-server" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.024241 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="eeb3fd25-b829-4977-aac0-aa2539bf13d0" containerName="registry-server" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.024258 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="3f4078de-237b-4252-be46-f0b89d21c8ed" containerName="registry-server" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.024270 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="4aa99c79-80c4-41a9-beaa-32c9643971e5" containerName="registry-server" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.024282 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="5119035c-68df-4e77-a6d5-046a55975238" containerName="pruner" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.024291 5119 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="a90aed0b-2281-4055-843d-678b06d52325" containerName="registry-server" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.030086 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.032132 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.032706 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.039113 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.164716 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5058f05d-8a72-417d-9207-5d43f75e61ac-var-lock\") pod \"installer-12-crc\" (UID: \"5058f05d-8a72-417d-9207-5d43f75e61ac\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.164834 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5058f05d-8a72-417d-9207-5d43f75e61ac-kube-api-access\") pod \"installer-12-crc\" (UID: \"5058f05d-8a72-417d-9207-5d43f75e61ac\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.164907 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5058f05d-8a72-417d-9207-5d43f75e61ac-kubelet-dir\") pod \"installer-12-crc\" (UID: \"5058f05d-8a72-417d-9207-5d43f75e61ac\") " 
pod="openshift-kube-apiserver/installer-12-crc" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.266999 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5058f05d-8a72-417d-9207-5d43f75e61ac-kube-api-access\") pod \"installer-12-crc\" (UID: \"5058f05d-8a72-417d-9207-5d43f75e61ac\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.267149 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5058f05d-8a72-417d-9207-5d43f75e61ac-kubelet-dir\") pod \"installer-12-crc\" (UID: \"5058f05d-8a72-417d-9207-5d43f75e61ac\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.267199 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5058f05d-8a72-417d-9207-5d43f75e61ac-var-lock\") pod \"installer-12-crc\" (UID: \"5058f05d-8a72-417d-9207-5d43f75e61ac\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.267355 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5058f05d-8a72-417d-9207-5d43f75e61ac-kubelet-dir\") pod \"installer-12-crc\" (UID: \"5058f05d-8a72-417d-9207-5d43f75e61ac\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.267400 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5058f05d-8a72-417d-9207-5d43f75e61ac-var-lock\") pod \"installer-12-crc\" (UID: \"5058f05d-8a72-417d-9207-5d43f75e61ac\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.284865 5119 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5058f05d-8a72-417d-9207-5d43f75e61ac-kube-api-access\") pod \"installer-12-crc\" (UID: \"5058f05d-8a72-417d-9207-5d43f75e61ac\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.348218 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 21 09:57:31 crc kubenswrapper[5119]: I0121 09:57:31.822727 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 21 09:57:32 crc kubenswrapper[5119]: I0121 09:57:32.806907 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"5058f05d-8a72-417d-9207-5d43f75e61ac","Type":"ContainerStarted","Data":"bb02c3c56a4060b4926a9d999b868ae013b326b680c5efe7da1178a4b27d0e35"} Jan 21 09:57:32 crc kubenswrapper[5119]: I0121 09:57:32.807413 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"5058f05d-8a72-417d-9207-5d43f75e61ac","Type":"ContainerStarted","Data":"e49008443d287b8d97bd14e5c72922ed546beff20226a9538478c63318db2aad"} Jan 21 09:57:32 crc kubenswrapper[5119]: I0121 09:57:32.824689 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=1.8246731980000002 podStartE2EDuration="1.824673198s" podCreationTimestamp="2026-01-21 09:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:57:32.821466682 +0000 UTC m=+168.489558380" watchObservedRunningTime="2026-01-21 09:57:32.824673198 +0000 UTC m=+168.492764876" Jan 21 09:58:01 crc kubenswrapper[5119]: I0121 09:58:01.586371 5119 ???:1] "http: TLS handshake error from 192.168.126.11:34596: no serving certificate 
available for the kubelet" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.371731 5119 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.402517 5119 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.402699 5119 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.402838 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.403676 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://abe88e871fc9fda8e6ae7b32df7e0c1c09d159aad9289a8cbdd48ed4d8b3a85b" gracePeriod=15 Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.403827 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://80a2b2ffd835f4cb06510091082012d441852e2432e9b9235a1523ae4ccc2736" gracePeriod=15 Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.403731 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://fbf74d1fc692a3245ff41794add57b6678b1278b4174c1a9ac83e01e90150291" gracePeriod=15 Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.403898 5119 kuberuntime_container.go:858] "Killing container with a grace 
period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://727f79868ee86317b4673832731b31a9bfda9544199d9a8a15703907dbcc5955" gracePeriod=15 Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.404097 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.404123 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.404146 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.404158 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.404194 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.404206 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.404227 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.404239 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 09:58:10 crc 
kubenswrapper[5119]: I0121 09:58:10.404251 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.404263 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.404285 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.404297 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.404309 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.404320 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.404335 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.404346 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.404375 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.404386 
5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.404647 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.404671 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.404687 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.404699 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.404715 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.404737 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.404755 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.404770 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.404757 5119 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://f92a4390d94c573772e9bff65e3727a393648086e6c71d37714995ecbf1659a2" gracePeriod=15 Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.404935 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.404950 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.405284 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.408010 5119 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.486391 5119 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: E0121 09:58:10.487063 5119 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.584032 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.584342 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.584367 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.584393 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.584426 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.584448 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.584465 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.584485 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.584505 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.584563 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.685823 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.685927 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.686006 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.686070 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.686103 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.686105 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.686186 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.686232 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.686278 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.686320 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.686339 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" 
Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.686320 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.686349 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.686368 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.686320 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.686382 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 
09:58:10.686422 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.686497 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.686580 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.686654 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: I0121 09:58:10.787649 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:58:10 crc kubenswrapper[5119]: E0121 09:58:10.812217 5119 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.234:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cb691c33b552a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:58:10.811721002 +0000 UTC m=+206.479812680,LastTimestamp:2026-01-21 09:58:10.811721002 +0000 UTC m=+206.479812680,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:58:11 crc kubenswrapper[5119]: I0121 09:58:11.019179 5119 generic.go:358] "Generic (PLEG): container finished" podID="5058f05d-8a72-417d-9207-5d43f75e61ac" containerID="bb02c3c56a4060b4926a9d999b868ae013b326b680c5efe7da1178a4b27d0e35" exitCode=0 Jan 21 09:58:11 crc kubenswrapper[5119]: I0121 09:58:11.019265 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"5058f05d-8a72-417d-9207-5d43f75e61ac","Type":"ContainerDied","Data":"bb02c3c56a4060b4926a9d999b868ae013b326b680c5efe7da1178a4b27d0e35"} Jan 21 09:58:11 crc kubenswrapper[5119]: I0121 09:58:11.020311 5119 status_manager.go:895] "Failed to get status for pod" 
podUID="5058f05d-8a72-417d-9207-5d43f75e61ac" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Jan 21 09:58:11 crc kubenswrapper[5119]: I0121 09:58:11.021693 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 21 09:58:11 crc kubenswrapper[5119]: I0121 09:58:11.022708 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 21 09:58:11 crc kubenswrapper[5119]: I0121 09:58:11.023292 5119 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="fbf74d1fc692a3245ff41794add57b6678b1278b4174c1a9ac83e01e90150291" exitCode=0 Jan 21 09:58:11 crc kubenswrapper[5119]: I0121 09:58:11.023308 5119 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="f92a4390d94c573772e9bff65e3727a393648086e6c71d37714995ecbf1659a2" exitCode=0 Jan 21 09:58:11 crc kubenswrapper[5119]: I0121 09:58:11.023315 5119 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="80a2b2ffd835f4cb06510091082012d441852e2432e9b9235a1523ae4ccc2736" exitCode=2 Jan 21 09:58:11 crc kubenswrapper[5119]: I0121 09:58:11.023389 5119 scope.go:117] "RemoveContainer" containerID="af966673193b18b94972ab97e230ca79320cda15af6c9ff9d82808e946b8c6a8" Jan 21 09:58:11 crc kubenswrapper[5119]: I0121 09:58:11.024997 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"18c19dcaeaf3324cd40f1b4bb8613403c0ad32e44fdfa32884111a4f36a031e6"} Jan 21 09:58:12 crc kubenswrapper[5119]: I0121 09:58:12.033768 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 21 09:58:12 crc kubenswrapper[5119]: I0121 09:58:12.036426 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"3fdc02fa7f3761265d64ca08d5126bb335077089104d5dd63eee678d0fdc458c"} Jan 21 09:58:12 crc kubenswrapper[5119]: I0121 09:58:12.036869 5119 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:58:12 crc kubenswrapper[5119]: I0121 09:58:12.037419 5119 status_manager.go:895] "Failed to get status for pod" podUID="5058f05d-8a72-417d-9207-5d43f75e61ac" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Jan 21 09:58:12 crc kubenswrapper[5119]: E0121 09:58:12.037650 5119 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:58:12 crc kubenswrapper[5119]: I0121 09:58:12.301137 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 21 09:58:12 crc kubenswrapper[5119]: I0121 09:58:12.302539 5119 status_manager.go:895] "Failed to get status for pod" podUID="5058f05d-8a72-417d-9207-5d43f75e61ac" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Jan 21 09:58:12 crc kubenswrapper[5119]: I0121 09:58:12.408266 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5058f05d-8a72-417d-9207-5d43f75e61ac-kube-api-access\") pod \"5058f05d-8a72-417d-9207-5d43f75e61ac\" (UID: \"5058f05d-8a72-417d-9207-5d43f75e61ac\") " Jan 21 09:58:12 crc kubenswrapper[5119]: I0121 09:58:12.408313 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5058f05d-8a72-417d-9207-5d43f75e61ac-kubelet-dir\") pod \"5058f05d-8a72-417d-9207-5d43f75e61ac\" (UID: \"5058f05d-8a72-417d-9207-5d43f75e61ac\") " Jan 21 09:58:12 crc kubenswrapper[5119]: I0121 09:58:12.408338 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5058f05d-8a72-417d-9207-5d43f75e61ac-var-lock\") pod \"5058f05d-8a72-417d-9207-5d43f75e61ac\" (UID: \"5058f05d-8a72-417d-9207-5d43f75e61ac\") " Jan 21 09:58:12 crc kubenswrapper[5119]: I0121 09:58:12.408477 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5058f05d-8a72-417d-9207-5d43f75e61ac-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5058f05d-8a72-417d-9207-5d43f75e61ac" (UID: "5058f05d-8a72-417d-9207-5d43f75e61ac"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:58:12 crc kubenswrapper[5119]: I0121 09:58:12.408557 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5058f05d-8a72-417d-9207-5d43f75e61ac-var-lock" (OuterVolumeSpecName: "var-lock") pod "5058f05d-8a72-417d-9207-5d43f75e61ac" (UID: "5058f05d-8a72-417d-9207-5d43f75e61ac"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:58:12 crc kubenswrapper[5119]: I0121 09:58:12.408781 5119 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5058f05d-8a72-417d-9207-5d43f75e61ac-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 09:58:12 crc kubenswrapper[5119]: I0121 09:58:12.408817 5119 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5058f05d-8a72-417d-9207-5d43f75e61ac-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 09:58:12 crc kubenswrapper[5119]: I0121 09:58:12.416719 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5058f05d-8a72-417d-9207-5d43f75e61ac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5058f05d-8a72-417d-9207-5d43f75e61ac" (UID: "5058f05d-8a72-417d-9207-5d43f75e61ac"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:58:12 crc kubenswrapper[5119]: I0121 09:58:12.510239 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5058f05d-8a72-417d-9207-5d43f75e61ac-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 09:58:12 crc kubenswrapper[5119]: E0121 09:58:12.588870 5119 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.234:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cb691c33b552a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:58:10.811721002 +0000 UTC m=+206.479812680,LastTimestamp:2026-01-21 09:58:10.811721002 +0000 UTC m=+206.479812680,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:58:13 crc kubenswrapper[5119]: I0121 09:58:13.042296 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"5058f05d-8a72-417d-9207-5d43f75e61ac","Type":"ContainerDied","Data":"e49008443d287b8d97bd14e5c72922ed546beff20226a9538478c63318db2aad"} Jan 21 09:58:13 crc kubenswrapper[5119]: I0121 09:58:13.042559 5119 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="e49008443d287b8d97bd14e5c72922ed546beff20226a9538478c63318db2aad" Jan 21 09:58:13 crc kubenswrapper[5119]: I0121 09:58:13.042333 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 21 09:58:13 crc kubenswrapper[5119]: I0121 09:58:13.047185 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 21 09:58:13 crc kubenswrapper[5119]: I0121 09:58:13.047519 5119 status_manager.go:895] "Failed to get status for pod" podUID="5058f05d-8a72-417d-9207-5d43f75e61ac" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Jan 21 09:58:13 crc kubenswrapper[5119]: I0121 09:58:13.048507 5119 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="abe88e871fc9fda8e6ae7b32df7e0c1c09d159aad9289a8cbdd48ed4d8b3a85b" exitCode=0 Jan 21 09:58:13 crc kubenswrapper[5119]: I0121 09:58:13.049034 5119 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:58:13 crc kubenswrapper[5119]: E0121 09:58:13.049681 5119 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:58:14 crc kubenswrapper[5119]: I0121 09:58:14.597233 5119 status_manager.go:895] "Failed to get status for pod" podUID="5058f05d-8a72-417d-9207-5d43f75e61ac" pod="openshift-kube-apiserver/installer-12-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Jan 21 09:58:15 crc kubenswrapper[5119]: I0121 09:58:15.653174 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 21 09:58:15 crc kubenswrapper[5119]: I0121 09:58:15.654821 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:58:15 crc kubenswrapper[5119]: I0121 09:58:15.657582 5119 status_manager.go:895] "Failed to get status for pod" podUID="5058f05d-8a72-417d-9207-5d43f75e61ac" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Jan 21 09:58:15 crc kubenswrapper[5119]: I0121 09:58:15.662164 5119 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Jan 21 09:58:15 crc kubenswrapper[5119]: I0121 09:58:15.761560 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 21 09:58:15 crc kubenswrapper[5119]: I0121 09:58:15.761672 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: 
\"3a14caf222afb62aaabdc47808b6f944\") " Jan 21 09:58:15 crc kubenswrapper[5119]: I0121 09:58:15.761709 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 21 09:58:15 crc kubenswrapper[5119]: I0121 09:58:15.761846 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 21 09:58:15 crc kubenswrapper[5119]: I0121 09:58:15.761892 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:58:15 crc kubenswrapper[5119]: I0121 09:58:15.761903 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 21 09:58:15 crc kubenswrapper[5119]: I0121 09:58:15.761954 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:58:15 crc kubenswrapper[5119]: I0121 09:58:15.762016 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:58:15 crc kubenswrapper[5119]: I0121 09:58:15.762338 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:58:15 crc kubenswrapper[5119]: I0121 09:58:15.762522 5119 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 09:58:15 crc kubenswrapper[5119]: I0121 09:58:15.762541 5119 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 21 09:58:15 crc kubenswrapper[5119]: I0121 09:58:15.762549 5119 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Jan 21 09:58:15 crc kubenswrapper[5119]: I0121 09:58:15.762561 5119 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 09:58:15 crc kubenswrapper[5119]: I0121 09:58:15.764835 5119 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:58:15 crc kubenswrapper[5119]: I0121 09:58:15.863942 5119 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 21 09:58:16 crc kubenswrapper[5119]: I0121 09:58:16.082746 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 21 09:58:16 crc kubenswrapper[5119]: I0121 09:58:16.087186 5119 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="727f79868ee86317b4673832731b31a9bfda9544199d9a8a15703907dbcc5955" exitCode=0 Jan 21 09:58:16 crc kubenswrapper[5119]: I0121 09:58:16.087348 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:58:16 crc kubenswrapper[5119]: I0121 09:58:16.087386 5119 scope.go:117] "RemoveContainer" containerID="fbf74d1fc692a3245ff41794add57b6678b1278b4174c1a9ac83e01e90150291" Jan 21 09:58:16 crc kubenswrapper[5119]: I0121 09:58:16.104363 5119 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Jan 21 09:58:16 crc kubenswrapper[5119]: I0121 09:58:16.104709 5119 status_manager.go:895] "Failed to get status for pod" podUID="5058f05d-8a72-417d-9207-5d43f75e61ac" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Jan 21 09:58:16 crc kubenswrapper[5119]: I0121 09:58:16.110650 5119 scope.go:117] "RemoveContainer" containerID="f92a4390d94c573772e9bff65e3727a393648086e6c71d37714995ecbf1659a2" Jan 21 09:58:16 crc kubenswrapper[5119]: I0121 09:58:16.129283 5119 scope.go:117] "RemoveContainer" containerID="727f79868ee86317b4673832731b31a9bfda9544199d9a8a15703907dbcc5955" Jan 21 09:58:16 crc kubenswrapper[5119]: I0121 09:58:16.154192 5119 scope.go:117] "RemoveContainer" containerID="80a2b2ffd835f4cb06510091082012d441852e2432e9b9235a1523ae4ccc2736" Jan 21 09:58:16 crc kubenswrapper[5119]: I0121 09:58:16.176687 5119 scope.go:117] "RemoveContainer" containerID="abe88e871fc9fda8e6ae7b32df7e0c1c09d159aad9289a8cbdd48ed4d8b3a85b" Jan 21 09:58:16 crc kubenswrapper[5119]: I0121 09:58:16.197114 5119 scope.go:117] "RemoveContainer" containerID="6bc754856ba8182d75875ffd6507b081cb1e493e98c630dc820f040589b4412a" Jan 21 09:58:16 crc kubenswrapper[5119]: I0121 09:58:16.267587 5119 
scope.go:117] "RemoveContainer" containerID="fbf74d1fc692a3245ff41794add57b6678b1278b4174c1a9ac83e01e90150291" Jan 21 09:58:16 crc kubenswrapper[5119]: E0121 09:58:16.268305 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbf74d1fc692a3245ff41794add57b6678b1278b4174c1a9ac83e01e90150291\": container with ID starting with fbf74d1fc692a3245ff41794add57b6678b1278b4174c1a9ac83e01e90150291 not found: ID does not exist" containerID="fbf74d1fc692a3245ff41794add57b6678b1278b4174c1a9ac83e01e90150291" Jan 21 09:58:16 crc kubenswrapper[5119]: I0121 09:58:16.268345 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbf74d1fc692a3245ff41794add57b6678b1278b4174c1a9ac83e01e90150291"} err="failed to get container status \"fbf74d1fc692a3245ff41794add57b6678b1278b4174c1a9ac83e01e90150291\": rpc error: code = NotFound desc = could not find container \"fbf74d1fc692a3245ff41794add57b6678b1278b4174c1a9ac83e01e90150291\": container with ID starting with fbf74d1fc692a3245ff41794add57b6678b1278b4174c1a9ac83e01e90150291 not found: ID does not exist" Jan 21 09:58:16 crc kubenswrapper[5119]: I0121 09:58:16.268370 5119 scope.go:117] "RemoveContainer" containerID="f92a4390d94c573772e9bff65e3727a393648086e6c71d37714995ecbf1659a2" Jan 21 09:58:16 crc kubenswrapper[5119]: E0121 09:58:16.268625 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f92a4390d94c573772e9bff65e3727a393648086e6c71d37714995ecbf1659a2\": container with ID starting with f92a4390d94c573772e9bff65e3727a393648086e6c71d37714995ecbf1659a2 not found: ID does not exist" containerID="f92a4390d94c573772e9bff65e3727a393648086e6c71d37714995ecbf1659a2" Jan 21 09:58:16 crc kubenswrapper[5119]: I0121 09:58:16.268657 5119 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f92a4390d94c573772e9bff65e3727a393648086e6c71d37714995ecbf1659a2"} err="failed to get container status \"f92a4390d94c573772e9bff65e3727a393648086e6c71d37714995ecbf1659a2\": rpc error: code = NotFound desc = could not find container \"f92a4390d94c573772e9bff65e3727a393648086e6c71d37714995ecbf1659a2\": container with ID starting with f92a4390d94c573772e9bff65e3727a393648086e6c71d37714995ecbf1659a2 not found: ID does not exist" Jan 21 09:58:16 crc kubenswrapper[5119]: I0121 09:58:16.268674 5119 scope.go:117] "RemoveContainer" containerID="727f79868ee86317b4673832731b31a9bfda9544199d9a8a15703907dbcc5955" Jan 21 09:58:16 crc kubenswrapper[5119]: E0121 09:58:16.268991 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"727f79868ee86317b4673832731b31a9bfda9544199d9a8a15703907dbcc5955\": container with ID starting with 727f79868ee86317b4673832731b31a9bfda9544199d9a8a15703907dbcc5955 not found: ID does not exist" containerID="727f79868ee86317b4673832731b31a9bfda9544199d9a8a15703907dbcc5955" Jan 21 09:58:16 crc kubenswrapper[5119]: I0121 09:58:16.269015 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"727f79868ee86317b4673832731b31a9bfda9544199d9a8a15703907dbcc5955"} err="failed to get container status \"727f79868ee86317b4673832731b31a9bfda9544199d9a8a15703907dbcc5955\": rpc error: code = NotFound desc = could not find container \"727f79868ee86317b4673832731b31a9bfda9544199d9a8a15703907dbcc5955\": container with ID starting with 727f79868ee86317b4673832731b31a9bfda9544199d9a8a15703907dbcc5955 not found: ID does not exist" Jan 21 09:58:16 crc kubenswrapper[5119]: I0121 09:58:16.269030 5119 scope.go:117] "RemoveContainer" containerID="80a2b2ffd835f4cb06510091082012d441852e2432e9b9235a1523ae4ccc2736" Jan 21 09:58:16 crc kubenswrapper[5119]: E0121 09:58:16.269284 5119 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"80a2b2ffd835f4cb06510091082012d441852e2432e9b9235a1523ae4ccc2736\": container with ID starting with 80a2b2ffd835f4cb06510091082012d441852e2432e9b9235a1523ae4ccc2736 not found: ID does not exist" containerID="80a2b2ffd835f4cb06510091082012d441852e2432e9b9235a1523ae4ccc2736" Jan 21 09:58:16 crc kubenswrapper[5119]: I0121 09:58:16.269312 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80a2b2ffd835f4cb06510091082012d441852e2432e9b9235a1523ae4ccc2736"} err="failed to get container status \"80a2b2ffd835f4cb06510091082012d441852e2432e9b9235a1523ae4ccc2736\": rpc error: code = NotFound desc = could not find container \"80a2b2ffd835f4cb06510091082012d441852e2432e9b9235a1523ae4ccc2736\": container with ID starting with 80a2b2ffd835f4cb06510091082012d441852e2432e9b9235a1523ae4ccc2736 not found: ID does not exist" Jan 21 09:58:16 crc kubenswrapper[5119]: I0121 09:58:16.269330 5119 scope.go:117] "RemoveContainer" containerID="abe88e871fc9fda8e6ae7b32df7e0c1c09d159aad9289a8cbdd48ed4d8b3a85b" Jan 21 09:58:16 crc kubenswrapper[5119]: E0121 09:58:16.269743 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abe88e871fc9fda8e6ae7b32df7e0c1c09d159aad9289a8cbdd48ed4d8b3a85b\": container with ID starting with abe88e871fc9fda8e6ae7b32df7e0c1c09d159aad9289a8cbdd48ed4d8b3a85b not found: ID does not exist" containerID="abe88e871fc9fda8e6ae7b32df7e0c1c09d159aad9289a8cbdd48ed4d8b3a85b" Jan 21 09:58:16 crc kubenswrapper[5119]: I0121 09:58:16.269795 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abe88e871fc9fda8e6ae7b32df7e0c1c09d159aad9289a8cbdd48ed4d8b3a85b"} err="failed to get container status \"abe88e871fc9fda8e6ae7b32df7e0c1c09d159aad9289a8cbdd48ed4d8b3a85b\": rpc error: code = NotFound desc = could not find container 
\"abe88e871fc9fda8e6ae7b32df7e0c1c09d159aad9289a8cbdd48ed4d8b3a85b\": container with ID starting with abe88e871fc9fda8e6ae7b32df7e0c1c09d159aad9289a8cbdd48ed4d8b3a85b not found: ID does not exist" Jan 21 09:58:16 crc kubenswrapper[5119]: I0121 09:58:16.269827 5119 scope.go:117] "RemoveContainer" containerID="6bc754856ba8182d75875ffd6507b081cb1e493e98c630dc820f040589b4412a" Jan 21 09:58:16 crc kubenswrapper[5119]: E0121 09:58:16.270208 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bc754856ba8182d75875ffd6507b081cb1e493e98c630dc820f040589b4412a\": container with ID starting with 6bc754856ba8182d75875ffd6507b081cb1e493e98c630dc820f040589b4412a not found: ID does not exist" containerID="6bc754856ba8182d75875ffd6507b081cb1e493e98c630dc820f040589b4412a" Jan 21 09:58:16 crc kubenswrapper[5119]: I0121 09:58:16.270233 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bc754856ba8182d75875ffd6507b081cb1e493e98c630dc820f040589b4412a"} err="failed to get container status \"6bc754856ba8182d75875ffd6507b081cb1e493e98c630dc820f040589b4412a\": rpc error: code = NotFound desc = could not find container \"6bc754856ba8182d75875ffd6507b081cb1e493e98c630dc820f040589b4412a\": container with ID starting with 6bc754856ba8182d75875ffd6507b081cb1e493e98c630dc820f040589b4412a not found: ID does not exist" Jan 21 09:58:16 crc kubenswrapper[5119]: I0121 09:58:16.600140 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Jan 21 09:58:19 crc kubenswrapper[5119]: E0121 09:58:19.455468 5119 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" Jan 21 09:58:19 crc 
kubenswrapper[5119]: E0121 09:58:19.455930 5119 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" Jan 21 09:58:19 crc kubenswrapper[5119]: E0121 09:58:19.456250 5119 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" Jan 21 09:58:19 crc kubenswrapper[5119]: E0121 09:58:19.456596 5119 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" Jan 21 09:58:19 crc kubenswrapper[5119]: E0121 09:58:19.456983 5119 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" Jan 21 09:58:19 crc kubenswrapper[5119]: I0121 09:58:19.457026 5119 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 21 09:58:19 crc kubenswrapper[5119]: E0121 09:58:19.457358 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="200ms" Jan 21 09:58:19 crc kubenswrapper[5119]: E0121 09:58:19.658294 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection 
refused" interval="400ms" Jan 21 09:58:20 crc kubenswrapper[5119]: E0121 09:58:20.059973 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="800ms" Jan 21 09:58:20 crc kubenswrapper[5119]: E0121 09:58:20.861430 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="1.6s" Jan 21 09:58:22 crc kubenswrapper[5119]: E0121 09:58:22.462595 5119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="3.2s" Jan 21 09:58:22 crc kubenswrapper[5119]: I0121 09:58:22.590399 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:58:22 crc kubenswrapper[5119]: E0121 09:58:22.590558 5119 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.234:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cb691c33b552a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:58:10.811721002 +0000 UTC m=+206.479812680,LastTimestamp:2026-01-21 09:58:10.811721002 +0000 UTC m=+206.479812680,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:58:22 crc kubenswrapper[5119]: I0121 09:58:22.591440 5119 status_manager.go:895] "Failed to get status for pod" podUID="5058f05d-8a72-417d-9207-5d43f75e61ac" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Jan 21 09:58:22 crc kubenswrapper[5119]: I0121 09:58:22.614308 5119 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5ed9b586-3ebc-4f27-bf76-88b6622745c6" Jan 21 09:58:22 crc kubenswrapper[5119]: I0121 09:58:22.614544 5119 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5ed9b586-3ebc-4f27-bf76-88b6622745c6" Jan 21 09:58:22 crc kubenswrapper[5119]: E0121 09:58:22.614921 5119 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:58:22 crc kubenswrapper[5119]: I0121 09:58:22.615173 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:58:23 crc kubenswrapper[5119]: I0121 09:58:23.134363 5119 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="40075b79c26ce776b2b688eba4f7ca02d17848dd4536ba243d102f753f4eba54" exitCode=0 Jan 21 09:58:23 crc kubenswrapper[5119]: I0121 09:58:23.134423 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"40075b79c26ce776b2b688eba4f7ca02d17848dd4536ba243d102f753f4eba54"} Jan 21 09:58:23 crc kubenswrapper[5119]: I0121 09:58:23.134598 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"ac895b34c2d686f22f381449a45ea2e48a52766e2588071e3772c7a974096060"} Jan 21 09:58:23 crc kubenswrapper[5119]: I0121 09:58:23.134985 5119 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5ed9b586-3ebc-4f27-bf76-88b6622745c6" Jan 21 09:58:23 crc kubenswrapper[5119]: I0121 09:58:23.135003 5119 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5ed9b586-3ebc-4f27-bf76-88b6622745c6" Jan 21 09:58:23 crc kubenswrapper[5119]: E0121 09:58:23.135396 5119 
mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:58:23 crc kubenswrapper[5119]: I0121 09:58:23.136018 5119 status_manager.go:895] "Failed to get status for pod" podUID="5058f05d-8a72-417d-9207-5d43f75e61ac" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Jan 21 09:58:24 crc kubenswrapper[5119]: I0121 09:58:24.147370 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"3c73dfc86f93157060d8878adee46ed83d41ab49751fb3362ec469d2fd12a362"} Jan 21 09:58:24 crc kubenswrapper[5119]: I0121 09:58:24.147689 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"023cfea75729d3d7df29f57670cd7b633e2d7964a56af3f55897146c3005de2f"} Jan 21 09:58:24 crc kubenswrapper[5119]: I0121 09:58:24.147700 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"7bc6d50be6f8ed5982249dadbd38f5c766b9eafef68cd908b0a1f1f71a67aee5"} Jan 21 09:58:25 crc kubenswrapper[5119]: I0121 09:58:25.154491 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"e7e6097c29c97ecc9057e1213038994df415d332d38e4c606be74d8bea14d62f"} Jan 21 09:58:25 crc kubenswrapper[5119]: I0121 09:58:25.154910 5119 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"6eeda904cc2ec083bec0c5e99c8a6d57773f1f42a0f74c5299bd7a7d5a783bfb"} Jan 21 09:58:25 crc kubenswrapper[5119]: I0121 09:58:25.155124 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:58:25 crc kubenswrapper[5119]: I0121 09:58:25.155461 5119 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5ed9b586-3ebc-4f27-bf76-88b6622745c6" Jan 21 09:58:25 crc kubenswrapper[5119]: I0121 09:58:25.155569 5119 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5ed9b586-3ebc-4f27-bf76-88b6622745c6" Jan 21 09:58:25 crc kubenswrapper[5119]: I0121 09:58:25.156764 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 09:58:25 crc kubenswrapper[5119]: I0121 09:58:25.156809 5119 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="6a96bc82968554f4963af69b0650f31ed23187fc0611ce4a942afca605f17b35" exitCode=1 Jan 21 09:58:25 crc kubenswrapper[5119]: I0121 09:58:25.156886 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"6a96bc82968554f4963af69b0650f31ed23187fc0611ce4a942afca605f17b35"} Jan 21 09:58:25 crc kubenswrapper[5119]: I0121 09:58:25.157346 5119 scope.go:117] "RemoveContainer" containerID="6a96bc82968554f4963af69b0650f31ed23187fc0611ce4a942afca605f17b35" Jan 21 09:58:26 crc kubenswrapper[5119]: I0121 09:58:26.168199 5119 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 09:58:26 crc kubenswrapper[5119]: I0121 09:58:26.168720 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"527df225d80b2fd89bff8e352d14b6bcb74d7760079479c84d33c46ecccdff52"} Jan 21 09:58:27 crc kubenswrapper[5119]: I0121 09:58:27.616477 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:58:27 crc kubenswrapper[5119]: I0121 09:58:27.616866 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:58:27 crc kubenswrapper[5119]: I0121 09:58:27.625903 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:58:28 crc kubenswrapper[5119]: I0121 09:58:28.767349 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:58:28 crc kubenswrapper[5119]: I0121 09:58:28.768241 5119 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 21 09:58:28 crc kubenswrapper[5119]: I0121 09:58:28.768291 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: 
connect: connection refused" Jan 21 09:58:30 crc kubenswrapper[5119]: I0121 09:58:30.457311 5119 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:58:30 crc kubenswrapper[5119]: I0121 09:58:30.457351 5119 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:58:30 crc kubenswrapper[5119]: I0121 09:58:30.499420 5119 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ed9b586-3ebc-4f27-bf76-88b6622745c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:58:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:58:23Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:58:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-cert-syncer kube-apiserver-cert-regeneration-controller kube-apiserver-insecure-readyz kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:58:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-cert-syncer kube-apiserver-cert-regeneration-controller kube-apiserver-insecure-readyz 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://40075b79c26ce776b2b688eba4f7ca02d17848dd4536ba243d102f753f4eba54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40075b79c26ce776b2b688eba4f7ca02d17848dd4536ba243d102f753f4eba54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T09:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T09:58:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Pending\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Pod \"kube-apiserver-crc\" is invalid: metadata.uid: Invalid value: \"5ed9b586-3ebc-4f27-bf76-88b6622745c6\": field is immutable"
Jan 21 09:58:30 crc kubenswrapper[5119]: I0121 09:58:30.520021 5119 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="3386917a-5b6a-482c-ada1-08f96648ea26"
Jan 21 09:58:31 crc kubenswrapper[5119]: I0121 09:58:31.194991 5119 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5ed9b586-3ebc-4f27-bf76-88b6622745c6"
Jan 21 09:58:31 crc kubenswrapper[5119]: I0121 09:58:31.195043 5119 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5ed9b586-3ebc-4f27-bf76-88b6622745c6"
Jan 21 09:58:31 crc kubenswrapper[5119]: I0121 09:58:31.198164 5119 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="3386917a-5b6a-482c-ada1-08f96648ea26"
Jan 21 09:58:31 crc kubenswrapper[5119]: I0121 09:58:31.199782 5119 status_manager.go:346] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc"
containerID="cri-o://7bc6d50be6f8ed5982249dadbd38f5c766b9eafef68cd908b0a1f1f71a67aee5"
Jan 21 09:58:31 crc kubenswrapper[5119]: I0121 09:58:31.199820 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 09:58:32 crc kubenswrapper[5119]: I0121 09:58:32.199876 5119 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5ed9b586-3ebc-4f27-bf76-88b6622745c6"
Jan 21 09:58:32 crc kubenswrapper[5119]: I0121 09:58:32.199905 5119 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5ed9b586-3ebc-4f27-bf76-88b6622745c6"
Jan 21 09:58:32 crc kubenswrapper[5119]: I0121 09:58:32.202709 5119 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="3386917a-5b6a-482c-ada1-08f96648ea26"
Jan 21 09:58:32 crc kubenswrapper[5119]: I0121 09:58:32.651070 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 09:58:38 crc kubenswrapper[5119]: I0121 09:58:38.767749 5119 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 21 09:58:38 crc kubenswrapper[5119]: I0121 09:58:38.768798 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 21 09:58:40 crc
kubenswrapper[5119]: I0121 09:58:40.939313 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Jan 21 09:58:41 crc kubenswrapper[5119]: I0121 09:58:41.044879 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Jan 21 09:58:42 crc kubenswrapper[5119]: I0121 09:58:42.002210 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Jan 21 09:58:42 crc kubenswrapper[5119]: I0121 09:58:42.145469 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\""
Jan 21 09:58:42 crc kubenswrapper[5119]: I0121 09:58:42.582546 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\""
Jan 21 09:58:42 crc kubenswrapper[5119]: I0121 09:58:42.756463 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\""
Jan 21 09:58:42 crc kubenswrapper[5119]: I0121 09:58:42.776631 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Jan 21 09:58:42 crc kubenswrapper[5119]: I0121 09:58:42.909011 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Jan 21 09:58:43 crc kubenswrapper[5119]: I0121 09:58:43.058777 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Jan 21 09:58:43 crc kubenswrapper[5119]: I0121 09:58:43.059077 5119 reflector.go:430] "Caches populated" type="*v1.Secret"
reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Jan 21 09:58:43 crc kubenswrapper[5119]: I0121 09:58:43.071222 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\""
Jan 21 09:58:43 crc kubenswrapper[5119]: I0121 09:58:43.315991 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Jan 21 09:58:43 crc kubenswrapper[5119]: I0121 09:58:43.510069 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\""
Jan 21 09:58:43 crc kubenswrapper[5119]: I0121 09:58:43.547992 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Jan 21 09:58:43 crc kubenswrapper[5119]: I0121 09:58:43.806290 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Jan 21 09:58:44 crc kubenswrapper[5119]: I0121 09:58:44.273899 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Jan 21 09:58:44 crc kubenswrapper[5119]: I0121 09:58:44.410657 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\""
Jan 21 09:58:44 crc kubenswrapper[5119]: I0121 09:58:44.474021 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Jan 21 09:58:44 crc kubenswrapper[5119]: I0121 09:58:44.491863 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Jan 21 09:58:44 crc kubenswrapper[5119]: I0121 09:58:44.514883 5119
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Jan 21 09:58:44 crc kubenswrapper[5119]: I0121 09:58:44.527439 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Jan 21 09:58:44 crc kubenswrapper[5119]: I0121 09:58:44.552041 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Jan 21 09:58:44 crc kubenswrapper[5119]: I0121 09:58:44.564985 5119 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 21 09:58:44 crc kubenswrapper[5119]: I0121 09:58:44.575499 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Jan 21 09:58:44 crc kubenswrapper[5119]: I0121 09:58:44.699122 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Jan 21 09:58:44 crc kubenswrapper[5119]: I0121 09:58:44.747198 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Jan 21 09:58:44 crc kubenswrapper[5119]: I0121 09:58:44.921306 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Jan 21 09:58:45 crc kubenswrapper[5119]: I0121 09:58:45.079732 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\""
Jan 21 09:58:45 crc kubenswrapper[5119]: I0121 09:58:45.176318 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Jan
21 09:58:45 crc kubenswrapper[5119]: I0121 09:58:45.194414 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Jan 21 09:58:45 crc kubenswrapper[5119]: I0121 09:58:45.199175 5119 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Jan 21 09:58:45 crc kubenswrapper[5119]: I0121 09:58:45.255433 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\""
Jan 21 09:58:45 crc kubenswrapper[5119]: I0121 09:58:45.261995 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\""
Jan 21 09:58:45 crc kubenswrapper[5119]: I0121 09:58:45.276030 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Jan 21 09:58:45 crc kubenswrapper[5119]: I0121 09:58:45.297050 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Jan 21 09:58:45 crc kubenswrapper[5119]: I0121 09:58:45.358462 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Jan 21 09:58:45 crc kubenswrapper[5119]: I0121 09:58:45.364194 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\""
Jan 21 09:58:45 crc kubenswrapper[5119]: I0121 09:58:45.458433 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\""
Jan 21 09:58:45 crc kubenswrapper[5119]: I0121 09:58:45.512416 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap"
reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Jan 21 09:58:45 crc kubenswrapper[5119]: I0121 09:58:45.534528 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Jan 21 09:58:45 crc kubenswrapper[5119]: I0121 09:58:45.665218 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Jan 21 09:58:45 crc kubenswrapper[5119]: I0121 09:58:45.731029 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Jan 21 09:58:45 crc kubenswrapper[5119]: I0121 09:58:45.767529 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Jan 21 09:58:45 crc kubenswrapper[5119]: I0121 09:58:45.839073 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Jan 21 09:58:45 crc kubenswrapper[5119]: I0121 09:58:45.900207 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\""
Jan 21 09:58:46 crc kubenswrapper[5119]: I0121 09:58:46.023493 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Jan 21 09:58:46 crc kubenswrapper[5119]: I0121 09:58:46.045895 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Jan 21 09:58:46 crc kubenswrapper[5119]: I0121 09:58:46.168999 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Jan 21 09:58:46 crc kubenswrapper[5119]: I0121 09:58:46.264302
5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Jan 21 09:58:46 crc kubenswrapper[5119]: I0121 09:58:46.266555 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\""
Jan 21 09:58:46 crc kubenswrapper[5119]: I0121 09:58:46.323382 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Jan 21 09:58:46 crc kubenswrapper[5119]: I0121 09:58:46.339992 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Jan 21 09:58:46 crc kubenswrapper[5119]: I0121 09:58:46.359274 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Jan 21 09:58:46 crc kubenswrapper[5119]: I0121 09:58:46.363510 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Jan 21 09:58:46 crc kubenswrapper[5119]: I0121 09:58:46.421904 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Jan 21 09:58:46 crc kubenswrapper[5119]: I0121 09:58:46.442314 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\""
Jan 21 09:58:46 crc kubenswrapper[5119]: I0121 09:58:46.621605 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\""
Jan 21 09:58:46 crc kubenswrapper[5119]: I0121 09:58:46.683528 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\""
Jan 21 09:58:46 crc kubenswrapper[5119]: I0121 09:58:46.701669 5119
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Jan 21 09:58:46 crc kubenswrapper[5119]: I0121 09:58:46.765519 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Jan 21 09:58:46 crc kubenswrapper[5119]: I0121 09:58:46.788264 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\""
Jan 21 09:58:46 crc kubenswrapper[5119]: I0121 09:58:46.833506 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Jan 21 09:58:46 crc kubenswrapper[5119]: I0121 09:58:46.868316 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Jan 21 09:58:46 crc kubenswrapper[5119]: I0121 09:58:46.874576 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Jan 21 09:58:46 crc kubenswrapper[5119]: I0121 09:58:46.892129 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Jan 21 09:58:46 crc kubenswrapper[5119]: I0121 09:58:46.914481 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Jan 21 09:58:46 crc kubenswrapper[5119]: I0121 09:58:46.963405 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Jan 21 09:58:46 crc kubenswrapper[5119]: I0121 09:58:46.985032 5119 reflector.go:430] "Caches populated" type="*v1.Secret"
reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.009756 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.016935 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.038568 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.040184 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.122196 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.141829 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.160870 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.176384 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.306431 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Jan 21
09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.315279 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.320077 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.325284 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.326858 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.339125 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.361156 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.391399 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.408848 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.460315 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.471857 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap"
reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.495091 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.507281 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.595578 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.630129 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.647023 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.722300 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.733845 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.757710 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.782696 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.839049 5119 reflector.go:430] "Caches populated"
type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.882686 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.949292 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.980789 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Jan 21 09:58:47 crc kubenswrapper[5119]: I0121 09:58:47.982101 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Jan 21 09:58:48 crc kubenswrapper[5119]: I0121 09:58:48.092585 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Jan 21 09:58:48 crc kubenswrapper[5119]: I0121 09:58:48.094735 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Jan 21 09:58:48 crc kubenswrapper[5119]: I0121 09:58:48.098490 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Jan 21 09:58:48 crc kubenswrapper[5119]: I0121 09:58:48.101575 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\""
Jan 21 09:58:48 crc kubenswrapper[5119]: I0121 09:58:48.229909 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
Jan 21 09:58:48 crc
kubenswrapper[5119]: I0121 09:58:48.286612 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\""
Jan 21 09:58:48 crc kubenswrapper[5119]: I0121 09:58:48.311101 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\""
Jan 21 09:58:48 crc kubenswrapper[5119]: I0121 09:58:48.330390 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Jan 21 09:58:48 crc kubenswrapper[5119]: I0121 09:58:48.341023 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Jan 21 09:58:48 crc kubenswrapper[5119]: I0121 09:58:48.433228 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Jan 21 09:58:48 crc kubenswrapper[5119]: I0121 09:58:48.508660 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Jan 21 09:58:48 crc kubenswrapper[5119]: I0121 09:58:48.594314 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\""
Jan 21 09:58:48 crc kubenswrapper[5119]: I0121 09:58:48.663077 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\""
Jan 21 09:58:48 crc kubenswrapper[5119]: I0121 09:58:48.667017 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Jan 21 09:58:48 crc kubenswrapper[5119]: I0121 09:58:48.702466 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Jan 21 09:58:48
crc kubenswrapper[5119]: I0121 09:58:48.767695 5119 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 21 09:58:48 crc kubenswrapper[5119]: I0121 09:58:48.767791 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 21 09:58:48 crc kubenswrapper[5119]: I0121 09:58:48.767855 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 09:58:48 crc kubenswrapper[5119]: I0121 09:58:48.768706 5119 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"527df225d80b2fd89bff8e352d14b6bcb74d7760079479c84d33c46ecccdff52"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Jan 21 09:58:48 crc kubenswrapper[5119]: I0121 09:58:48.768881 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" containerID="cri-o://527df225d80b2fd89bff8e352d14b6bcb74d7760079479c84d33c46ecccdff52" gracePeriod=30
Jan 21 09:58:48 crc kubenswrapper[5119]: I0121 09:58:48.869512 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\""
Jan 21 09:58:48 crc
kubenswrapper[5119]: I0121 09:58:48.894141 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 21 09:58:48 crc kubenswrapper[5119]: I0121 09:58:48.981082 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 21 09:58:48 crc kubenswrapper[5119]: I0121 09:58:48.983694 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 21 09:58:49 crc kubenswrapper[5119]: I0121 09:58:49.093816 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 21 09:58:49 crc kubenswrapper[5119]: I0121 09:58:49.099805 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 21 09:58:49 crc kubenswrapper[5119]: I0121 09:58:49.114811 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 21 09:58:49 crc kubenswrapper[5119]: I0121 09:58:49.166443 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 21 09:58:49 crc kubenswrapper[5119]: I0121 09:58:49.214097 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 21 09:58:49 crc kubenswrapper[5119]: I0121 09:58:49.216015 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 21 09:58:49 crc kubenswrapper[5119]: I0121 09:58:49.221343 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 21 09:58:49 crc 
kubenswrapper[5119]: I0121 09:58:49.221485 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 21 09:58:49 crc kubenswrapper[5119]: I0121 09:58:49.303574 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 21 09:58:49 crc kubenswrapper[5119]: I0121 09:58:49.306057 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 21 09:58:49 crc kubenswrapper[5119]: I0121 09:58:49.322506 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 21 09:58:49 crc kubenswrapper[5119]: I0121 09:58:49.343970 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 21 09:58:49 crc kubenswrapper[5119]: I0121 09:58:49.385833 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 21 09:58:49 crc kubenswrapper[5119]: I0121 09:58:49.387007 5119 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 21 09:58:49 crc kubenswrapper[5119]: I0121 09:58:49.422986 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 21 09:58:49 crc kubenswrapper[5119]: I0121 09:58:49.479498 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 21 09:58:49 crc kubenswrapper[5119]: I0121 09:58:49.499134 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 21 
09:58:49 crc kubenswrapper[5119]: I0121 09:58:49.836762 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Jan 21 09:58:49 crc kubenswrapper[5119]: I0121 09:58:49.910393 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Jan 21 09:58:49 crc kubenswrapper[5119]: I0121 09:58:49.922189 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 09:58:49 crc kubenswrapper[5119]: I0121 09:58:49.922279 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 09:58:49 crc kubenswrapper[5119]: I0121 09:58:49.933415 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 21 09:58:50 crc kubenswrapper[5119]: I0121 09:58:50.060471 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 21 09:58:50 crc kubenswrapper[5119]: I0121 09:58:50.100292 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 21 09:58:50 crc kubenswrapper[5119]: I0121 09:58:50.163385 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 21 09:58:50 crc kubenswrapper[5119]: I0121 09:58:50.191109 5119 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 21 09:58:50 crc kubenswrapper[5119]: I0121 09:58:50.252483 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 21 09:58:50 crc kubenswrapper[5119]: I0121 09:58:50.286123 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 21 09:58:50 crc kubenswrapper[5119]: I0121 09:58:50.337193 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 21 09:58:50 crc kubenswrapper[5119]: I0121 09:58:50.481471 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 21 09:58:50 crc kubenswrapper[5119]: I0121 09:58:50.500223 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 21 09:58:50 crc kubenswrapper[5119]: I0121 09:58:50.514142 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 21 09:58:50 crc kubenswrapper[5119]: I0121 09:58:50.526691 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 21 09:58:50 crc kubenswrapper[5119]: I0121 09:58:50.622832 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 21 09:58:50 crc kubenswrapper[5119]: I0121 09:58:50.653174 5119 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 21 09:58:50 crc kubenswrapper[5119]: I0121 09:58:50.660110 5119 kubelet.go:2547] "SyncLoop 
REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 09:58:50 crc kubenswrapper[5119]: I0121 09:58:50.660177 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 09:58:50 crc kubenswrapper[5119]: I0121 09:58:50.668240 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:58:50 crc kubenswrapper[5119]: I0121 09:58:50.684330 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=20.684316055 podStartE2EDuration="20.684316055s" podCreationTimestamp="2026-01-21 09:58:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:58:50.683271447 +0000 UTC m=+246.351363125" watchObservedRunningTime="2026-01-21 09:58:50.684316055 +0000 UTC m=+246.352407733" Jan 21 09:58:50 crc kubenswrapper[5119]: I0121 09:58:50.685141 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 21 09:58:50 crc kubenswrapper[5119]: I0121 09:58:50.727104 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 21 09:58:50 crc kubenswrapper[5119]: I0121 09:58:50.776028 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 21 09:58:50 crc kubenswrapper[5119]: I0121 09:58:50.814834 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 21 09:58:50 crc kubenswrapper[5119]: I0121 09:58:50.815179 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 21 09:58:50 crc kubenswrapper[5119]: I0121 09:58:50.841667 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 21 09:58:50 crc kubenswrapper[5119]: I0121 09:58:50.859113 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 21 09:58:50 crc kubenswrapper[5119]: I0121 09:58:50.935703 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 21 09:58:50 crc kubenswrapper[5119]: I0121 09:58:50.950363 5119 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 21 09:58:50 crc kubenswrapper[5119]: I0121 09:58:50.961561 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 21 09:58:50 crc kubenswrapper[5119]: I0121 09:58:50.990542 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 21 09:58:51 crc kubenswrapper[5119]: I0121 09:58:51.048254 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 21 09:58:51 crc kubenswrapper[5119]: I0121 09:58:51.058348 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 21 09:58:51 crc kubenswrapper[5119]: I0121 09:58:51.159058 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Jan 21 09:58:51 crc kubenswrapper[5119]: I0121 09:58:51.165839 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 21 09:58:51 crc kubenswrapper[5119]: I0121 09:58:51.266811 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 21 09:58:51 crc kubenswrapper[5119]: I0121 09:58:51.284490 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 21 09:58:51 crc kubenswrapper[5119]: I0121 09:58:51.357425 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 21 09:58:51 crc kubenswrapper[5119]: I0121 09:58:51.381326 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 21 09:58:51 crc kubenswrapper[5119]: I0121 09:58:51.452768 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 21 09:58:51 crc kubenswrapper[5119]: I0121 09:58:51.494658 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 21 09:58:51 crc kubenswrapper[5119]: I0121 09:58:51.505044 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 21 09:58:51 crc kubenswrapper[5119]: I0121 09:58:51.517900 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 21 09:58:51 crc kubenswrapper[5119]: I0121 09:58:51.556978 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 21 09:58:51 crc kubenswrapper[5119]: I0121 09:58:51.654577 5119 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 21 09:58:51 crc kubenswrapper[5119]: I0121 09:58:51.665123 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 21 09:58:51 crc kubenswrapper[5119]: I0121 09:58:51.666851 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 21 09:58:51 crc kubenswrapper[5119]: I0121 09:58:51.685532 5119 ???:1] "http: TLS handshake error from 192.168.126.11:47208: no serving certificate available for the kubelet" Jan 21 09:58:51 crc kubenswrapper[5119]: I0121 09:58:51.729860 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 21 09:58:51 crc kubenswrapper[5119]: I0121 09:58:51.733935 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 21 09:58:51 crc kubenswrapper[5119]: I0121 09:58:51.860894 5119 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 09:58:51 crc kubenswrapper[5119]: I0121 09:58:51.861190 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://3fdc02fa7f3761265d64ca08d5126bb335077089104d5dd63eee678d0fdc458c" gracePeriod=5 Jan 21 09:58:51 crc kubenswrapper[5119]: I0121 09:58:51.963659 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 21 09:58:52 crc kubenswrapper[5119]: I0121 09:58:52.074839 5119 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 21 09:58:52 crc kubenswrapper[5119]: I0121 09:58:52.107156 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 21 09:58:52 crc kubenswrapper[5119]: I0121 09:58:52.202368 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 21 09:58:52 crc kubenswrapper[5119]: I0121 09:58:52.266145 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 21 09:58:52 crc kubenswrapper[5119]: I0121 09:58:52.338183 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Jan 21 09:58:52 crc kubenswrapper[5119]: I0121 09:58:52.581891 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 21 09:58:52 crc kubenswrapper[5119]: I0121 09:58:52.621693 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Jan 21 09:58:52 crc kubenswrapper[5119]: I0121 09:58:52.743212 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 21 09:58:52 crc kubenswrapper[5119]: I0121 09:58:52.779208 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 21 09:58:53 crc kubenswrapper[5119]: I0121 09:58:53.045326 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 21 09:58:53 crc kubenswrapper[5119]: I0121 
09:58:53.053067 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 21 09:58:53 crc kubenswrapper[5119]: I0121 09:58:53.116638 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 21 09:58:53 crc kubenswrapper[5119]: I0121 09:58:53.155366 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 21 09:58:53 crc kubenswrapper[5119]: I0121 09:58:53.209163 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 21 09:58:53 crc kubenswrapper[5119]: I0121 09:58:53.248206 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 21 09:58:53 crc kubenswrapper[5119]: I0121 09:58:53.249372 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Jan 21 09:58:53 crc kubenswrapper[5119]: I0121 09:58:53.270266 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 21 09:58:53 crc kubenswrapper[5119]: I0121 09:58:53.303893 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 21 09:58:53 crc kubenswrapper[5119]: I0121 09:58:53.377057 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 21 09:58:53 crc kubenswrapper[5119]: I0121 09:58:53.499590 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 21 09:58:53 
crc kubenswrapper[5119]: I0121 09:58:53.513164 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 21 09:58:53 crc kubenswrapper[5119]: I0121 09:58:53.516182 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Jan 21 09:58:53 crc kubenswrapper[5119]: I0121 09:58:53.516348 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 21 09:58:53 crc kubenswrapper[5119]: I0121 09:58:53.519508 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 21 09:58:53 crc kubenswrapper[5119]: I0121 09:58:53.786282 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 21 09:58:53 crc kubenswrapper[5119]: I0121 09:58:53.851457 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 21 09:58:53 crc kubenswrapper[5119]: I0121 09:58:53.889549 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 21 09:58:53 crc kubenswrapper[5119]: I0121 09:58:53.899071 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 21 09:58:54 crc kubenswrapper[5119]: I0121 09:58:54.000867 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 21 09:58:54 crc kubenswrapper[5119]: I0121 09:58:54.007436 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 21 09:58:54 crc kubenswrapper[5119]: I0121 09:58:54.053697 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Jan 21 09:58:54 crc kubenswrapper[5119]: I0121 09:58:54.057342 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 21 09:58:54 crc kubenswrapper[5119]: I0121 09:58:54.150813 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 21 09:58:54 crc kubenswrapper[5119]: I0121 09:58:54.179131 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 21 09:58:54 crc kubenswrapper[5119]: I0121 09:58:54.252386 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 21 09:58:54 crc kubenswrapper[5119]: I0121 09:58:54.355693 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 21 09:58:54 crc kubenswrapper[5119]: I0121 09:58:54.369169 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 21 09:58:54 crc kubenswrapper[5119]: I0121 09:58:54.386979 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 21 09:58:54 crc kubenswrapper[5119]: I0121 09:58:54.490654 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 21 09:58:54 crc kubenswrapper[5119]: I0121 09:58:54.497990 5119 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Jan 21 09:58:54 crc kubenswrapper[5119]: I0121 09:58:54.630905 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 21 09:58:54 crc kubenswrapper[5119]: I0121 09:58:54.640537 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 21 09:58:54 crc kubenswrapper[5119]: I0121 09:58:54.945249 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 21 09:58:55 crc kubenswrapper[5119]: I0121 09:58:55.119433 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 21 09:58:55 crc kubenswrapper[5119]: I0121 09:58:55.120381 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 21 09:58:55 crc kubenswrapper[5119]: I0121 09:58:55.321347 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 21 09:58:55 crc kubenswrapper[5119]: I0121 09:58:55.482097 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 21 09:58:55 crc kubenswrapper[5119]: I0121 09:58:55.509591 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 21 09:58:55 crc kubenswrapper[5119]: I0121 09:58:55.532174 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 21 09:58:55 crc kubenswrapper[5119]: I0121 09:58:55.609212 5119 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 21 09:58:55 crc kubenswrapper[5119]: I0121 09:58:55.880880 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 21 09:58:55 crc kubenswrapper[5119]: I0121 09:58:55.988263 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 21 09:58:55 crc kubenswrapper[5119]: I0121 09:58:55.989381 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 21 09:58:56 crc kubenswrapper[5119]: I0121 09:58:56.283050 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Jan 21 09:58:56 crc kubenswrapper[5119]: I0121 09:58:56.333239 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 21 09:58:56 crc kubenswrapper[5119]: I0121 09:58:56.406929 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 21 09:58:56 crc kubenswrapper[5119]: I0121 09:58:56.420333 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 21 09:58:56 crc kubenswrapper[5119]: I0121 09:58:56.467203 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 21 09:58:56 crc kubenswrapper[5119]: I0121 09:58:56.618916 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 21 09:58:56 crc kubenswrapper[5119]: I0121 
09:58:56.873982 5119 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Jan 21 09:58:56 crc kubenswrapper[5119]: I0121 09:58:56.983040 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 21 09:58:57 crc kubenswrapper[5119]: I0121 09:58:57.129153 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 21 09:58:57 crc kubenswrapper[5119]: I0121 09:58:57.333460 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 21 09:58:57 crc kubenswrapper[5119]: I0121 09:58:57.333516 5119 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="3fdc02fa7f3761265d64ca08d5126bb335077089104d5dd63eee678d0fdc458c" exitCode=137 Jan 21 09:58:57 crc kubenswrapper[5119]: I0121 09:58:57.434809 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 21 09:58:57 crc kubenswrapper[5119]: I0121 09:58:57.434898 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 09:58:57 crc kubenswrapper[5119]: I0121 09:58:57.436507 5119 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Jan 21 09:58:57 crc kubenswrapper[5119]: I0121 09:58:57.546762 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Jan 21 09:58:57 crc kubenswrapper[5119]: I0121 09:58:57.546830 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Jan 21 09:58:57 crc kubenswrapper[5119]: I0121 09:58:57.546942 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Jan 21 09:58:57 crc kubenswrapper[5119]: I0121 09:58:57.546982 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Jan 21 09:58:57 crc kubenswrapper[5119]: I0121 09:58:57.547027 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Jan 21 09:58:57 crc kubenswrapper[5119]: I0121 09:58:57.547079 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 09:58:57 crc kubenswrapper[5119]: I0121 09:58:57.547139 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 09:58:57 crc kubenswrapper[5119]: I0121 09:58:57.547180 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 09:58:57 crc kubenswrapper[5119]: I0121 09:58:57.547220 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 09:58:57 crc kubenswrapper[5119]: I0121 09:58:57.547425 5119 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\""
Jan 21 09:58:57 crc kubenswrapper[5119]: I0121 09:58:57.547448 5119 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\""
Jan 21 09:58:57 crc kubenswrapper[5119]: I0121 09:58:57.547460 5119 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 21 09:58:57 crc kubenswrapper[5119]: I0121 09:58:57.547472 5119 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\""
Jan 21 09:58:57 crc kubenswrapper[5119]: I0121 09:58:57.555250 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 09:58:57 crc kubenswrapper[5119]: I0121 09:58:57.648922 5119 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 21 09:58:57 crc kubenswrapper[5119]: I0121 09:58:57.738566 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Jan 21 09:58:57 crc kubenswrapper[5119]: I0121 09:58:57.864152 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Jan 21 09:58:57 crc kubenswrapper[5119]: I0121 09:58:57.881536 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Jan 21 09:58:58 crc kubenswrapper[5119]: I0121 09:58:58.121165 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Jan 21 09:58:58 crc kubenswrapper[5119]: I0121 09:58:58.335049 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\""
Jan 21 09:58:58 crc kubenswrapper[5119]: I0121 09:58:58.339804 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Jan 21 09:58:58 crc kubenswrapper[5119]: I0121 09:58:58.339941 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 09:58:58 crc kubenswrapper[5119]: I0121 09:58:58.340013 5119 scope.go:117] "RemoveContainer" containerID="3fdc02fa7f3761265d64ca08d5126bb335077089104d5dd63eee678d0fdc458c"
Jan 21 09:58:58 crc kubenswrapper[5119]: I0121 09:58:58.348329 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\""
Jan 21 09:58:58 crc kubenswrapper[5119]: I0121 09:58:58.368062 5119 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Jan 21 09:58:58 crc kubenswrapper[5119]: I0121 09:58:58.401875 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Jan 21 09:58:58 crc kubenswrapper[5119]: I0121 09:58:58.572539 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Jan 21 09:58:58 crc kubenswrapper[5119]: I0121 09:58:58.600448 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes"
Jan 21 09:59:00 crc kubenswrapper[5119]: I0121 09:59:00.169095 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Jan 21 09:59:14 crc kubenswrapper[5119]: I0121 09:59:14.402229 5119 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-z67hs container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body=
Jan 21 09:59:14 crc kubenswrapper[5119]: I0121 09:59:14.402734 5119 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-z67hs" podUID="f5a878c2-9a7b-4d34-a9ee-28bdd05d3151" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused"
Jan 21 09:59:14 crc kubenswrapper[5119]: I0121 09:59:14.432549 5119 generic.go:358] "Generic (PLEG): container finished" podID="f5a878c2-9a7b-4d34-a9ee-28bdd05d3151" containerID="f2978d9abe5f90f85e007be87d1d5fea81e25ebc40c2677099504fb575e11a7b" exitCode=0
Jan 21 09:59:14 crc kubenswrapper[5119]: I0121 09:59:14.432635 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-z67hs" event={"ID":"f5a878c2-9a7b-4d34-a9ee-28bdd05d3151","Type":"ContainerDied","Data":"f2978d9abe5f90f85e007be87d1d5fea81e25ebc40c2677099504fb575e11a7b"}
Jan 21 09:59:14 crc kubenswrapper[5119]: I0121 09:59:14.433317 5119 scope.go:117] "RemoveContainer" containerID="f2978d9abe5f90f85e007be87d1d5fea81e25ebc40c2677099504fb575e11a7b"
Jan 21 09:59:15 crc kubenswrapper[5119]: I0121 09:59:15.439865 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-z67hs" event={"ID":"f5a878c2-9a7b-4d34-a9ee-28bdd05d3151","Type":"ContainerStarted","Data":"042dad7afbbc3c02318f0afc2aeada39b2c54f93a5637fab14247ccb054155e9"}
Jan 21 09:59:15 crc kubenswrapper[5119]: I0121 09:59:15.441427 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-z67hs"
Jan 21 09:59:15 crc kubenswrapper[5119]: I0121 09:59:15.443583 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-z67hs"
Jan 21 09:59:19 crc kubenswrapper[5119]: I0121 09:59:19.463371 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Jan 21 09:59:19 crc kubenswrapper[5119]: I0121 09:59:19.465765 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 21 09:59:19 crc kubenswrapper[5119]: I0121 09:59:19.465806 5119 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="527df225d80b2fd89bff8e352d14b6bcb74d7760079479c84d33c46ecccdff52" exitCode=137
Jan 21 09:59:19 crc kubenswrapper[5119]: I0121 09:59:19.465985 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"527df225d80b2fd89bff8e352d14b6bcb74d7760079479c84d33c46ecccdff52"}
Jan 21 09:59:19 crc kubenswrapper[5119]: I0121 09:59:19.466066 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"0739ff2ce925e40ed3c27fc5b7dbeed0bb3b4080fdc42b6a247eb44bae8570c2"}
Jan 21 09:59:19 crc kubenswrapper[5119]: I0121 09:59:19.466099 5119 scope.go:117] "RemoveContainer" containerID="6a96bc82968554f4963af69b0650f31ed23187fc0611ce4a942afca605f17b35"
Jan 21 09:59:19 crc kubenswrapper[5119]: I0121 09:59:19.919745 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 09:59:19 crc kubenswrapper[5119]: I0121 09:59:19.920061 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 09:59:20 crc kubenswrapper[5119]: I0121 09:59:20.471946 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Jan 21 09:59:22 crc kubenswrapper[5119]: I0121 09:59:22.651283 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 09:59:23 crc kubenswrapper[5119]: I0121 09:59:23.539901 5119 ???:1] "http: TLS handshake error from 192.168.126.11:53902: no serving certificate available for the kubelet"
Jan 21 09:59:28 crc kubenswrapper[5119]: I0121 09:59:28.767052 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 09:59:28 crc kubenswrapper[5119]: I0121 09:59:28.772875 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 09:59:29 crc kubenswrapper[5119]: I0121 09:59:29.529802 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.408144 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w7cjs"]
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.408943 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w7cjs" podUID="49467157-6fc6-4f0b-b833-1b95a6068d7e" containerName="registry-server" containerID="cri-o://0a0f44308b64040e047bc42fe33b01e052a666bbfb72e0335010ac1b947ae5bc" gracePeriod=30
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.418640 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9gxxh"]
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.418927 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9gxxh" podUID="9bccb111-fc78-420c-bb88-788974b0d7d5" containerName="registry-server" containerID="cri-o://b3074768bedee2cfb8e3b551c1bb3fb78486f073ff0a66549be988a2d905b70a" gracePeriod=30
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.428165 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-z67hs"]
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.428424 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-z67hs" podUID="f5a878c2-9a7b-4d34-a9ee-28bdd05d3151" containerName="marketplace-operator" containerID="cri-o://042dad7afbbc3c02318f0afc2aeada39b2c54f93a5637fab14247ccb054155e9" gracePeriod=30
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.435912 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4wln"]
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.436202 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j4wln" podUID="07776dee-a157-4c69-ae94-c63a101a84f2" containerName="registry-server" containerID="cri-o://4a490e997495f494203b80cf802d7aafd4cf0eded86c24f57ca9373e175c06e9" gracePeriod=30
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.448172 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9nmdb"]
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.448666 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9nmdb" podUID="6783a1d3-549e-4077-9898-723d2984e451" containerName="registry-server" containerID="cri-o://4f713b6d31a4cc4ab01666fdccc55d242f87aef51b86b5be3206bdfcc800b9e2" gracePeriod=30
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.482401 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-k595m"]
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.483088 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5058f05d-8a72-417d-9207-5d43f75e61ac" containerName="installer"
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.483178 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="5058f05d-8a72-417d-9207-5d43f75e61ac" containerName="installer"
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.483266 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.483322 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.483480 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="5058f05d-8a72-417d-9207-5d43f75e61ac" containerName="installer"
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.483547 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.497924 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-k595m"]
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.498092 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-k595m"
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.499128 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-gngm4"]
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.499402 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-gngm4" podUID="ee0294ff-f61f-492b-b738-fbbee8f757eb" containerName="controller-manager" containerID="cri-o://49a6bf6ae56dfc316394a3f87c6c306e64599a06b80212271663b64264194b7a" gracePeriod=30
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.501976 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98"]
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.502274 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98" podUID="d3c70e39-bf38-42a7-b579-ed17a163a5b1" containerName="route-controller-manager" containerID="cri-o://a7fba0330a98646af203f10d301d7c5346e6f74a1057cf333ae4391c410c415b" gracePeriod=30
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.510697 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-7tls5"]
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.636341 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b28083dd-7140-4978-9f2e-492904f94465-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-k595m\" (UID: \"b28083dd-7140-4978-9f2e-492904f94465\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k595m"
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.636693 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b28083dd-7140-4978-9f2e-492904f94465-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-k595m\" (UID: \"b28083dd-7140-4978-9f2e-492904f94465\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k595m"
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.636756 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b28083dd-7140-4978-9f2e-492904f94465-tmp\") pod \"marketplace-operator-547dbd544d-k595m\" (UID: \"b28083dd-7140-4978-9f2e-492904f94465\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k595m"
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.636772 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr6dt\" (UniqueName: \"kubernetes.io/projected/b28083dd-7140-4978-9f2e-492904f94465-kube-api-access-dr6dt\") pod \"marketplace-operator-547dbd544d-k595m\" (UID: \"b28083dd-7140-4978-9f2e-492904f94465\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k595m"
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.682886 5119 generic.go:358] "Generic (PLEG): container finished" podID="07776dee-a157-4c69-ae94-c63a101a84f2" containerID="4a490e997495f494203b80cf802d7aafd4cf0eded86c24f57ca9373e175c06e9" exitCode=0
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.683005 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4wln" event={"ID":"07776dee-a157-4c69-ae94-c63a101a84f2","Type":"ContainerDied","Data":"4a490e997495f494203b80cf802d7aafd4cf0eded86c24f57ca9373e175c06e9"}
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.704910 5119 generic.go:358] "Generic (PLEG): container finished" podID="9bccb111-fc78-420c-bb88-788974b0d7d5" containerID="b3074768bedee2cfb8e3b551c1bb3fb78486f073ff0a66549be988a2d905b70a" exitCode=0
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.705019 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gxxh" event={"ID":"9bccb111-fc78-420c-bb88-788974b0d7d5","Type":"ContainerDied","Data":"b3074768bedee2cfb8e3b551c1bb3fb78486f073ff0a66549be988a2d905b70a"}
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.718131 5119 generic.go:358] "Generic (PLEG): container finished" podID="f5a878c2-9a7b-4d34-a9ee-28bdd05d3151" containerID="042dad7afbbc3c02318f0afc2aeada39b2c54f93a5637fab14247ccb054155e9" exitCode=0
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.718214 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-z67hs" event={"ID":"f5a878c2-9a7b-4d34-a9ee-28bdd05d3151","Type":"ContainerDied","Data":"042dad7afbbc3c02318f0afc2aeada39b2c54f93a5637fab14247ccb054155e9"}
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.718246 5119 scope.go:117] "RemoveContainer" containerID="f2978d9abe5f90f85e007be87d1d5fea81e25ebc40c2677099504fb575e11a7b"
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.729553 5119 generic.go:358] "Generic (PLEG): container finished" podID="49467157-6fc6-4f0b-b833-1b95a6068d7e" containerID="0a0f44308b64040e047bc42fe33b01e052a666bbfb72e0335010ac1b947ae5bc" exitCode=0
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.729735 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w7cjs" event={"ID":"49467157-6fc6-4f0b-b833-1b95a6068d7e","Type":"ContainerDied","Data":"0a0f44308b64040e047bc42fe33b01e052a666bbfb72e0335010ac1b947ae5bc"}
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.735171 5119 generic.go:358] "Generic (PLEG): container finished" podID="6783a1d3-549e-4077-9898-723d2984e451" containerID="4f713b6d31a4cc4ab01666fdccc55d242f87aef51b86b5be3206bdfcc800b9e2" exitCode=0
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.735254 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nmdb" event={"ID":"6783a1d3-549e-4077-9898-723d2984e451","Type":"ContainerDied","Data":"4f713b6d31a4cc4ab01666fdccc55d242f87aef51b86b5be3206bdfcc800b9e2"}
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.738663 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b28083dd-7140-4978-9f2e-492904f94465-tmp\") pod \"marketplace-operator-547dbd544d-k595m\" (UID: \"b28083dd-7140-4978-9f2e-492904f94465\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k595m"
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.738699 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dr6dt\" (UniqueName: \"kubernetes.io/projected/b28083dd-7140-4978-9f2e-492904f94465-kube-api-access-dr6dt\") pod \"marketplace-operator-547dbd544d-k595m\" (UID: \"b28083dd-7140-4978-9f2e-492904f94465\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k595m"
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.738760 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b28083dd-7140-4978-9f2e-492904f94465-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-k595m\" (UID: \"b28083dd-7140-4978-9f2e-492904f94465\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k595m"
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.738808 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b28083dd-7140-4978-9f2e-492904f94465-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-k595m\" (UID: \"b28083dd-7140-4978-9f2e-492904f94465\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k595m"
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.739289 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b28083dd-7140-4978-9f2e-492904f94465-tmp\") pod \"marketplace-operator-547dbd544d-k595m\" (UID: \"b28083dd-7140-4978-9f2e-492904f94465\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k595m"
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.740361 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b28083dd-7140-4978-9f2e-492904f94465-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-k595m\" (UID: \"b28083dd-7140-4978-9f2e-492904f94465\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k595m"
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.745656 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b28083dd-7140-4978-9f2e-492904f94465-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-k595m\" (UID: \"b28083dd-7140-4978-9f2e-492904f94465\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k595m"
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.782428 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr6dt\" (UniqueName: \"kubernetes.io/projected/b28083dd-7140-4978-9f2e-492904f94465-kube-api-access-dr6dt\") pod \"marketplace-operator-547dbd544d-k595m\" (UID: \"b28083dd-7140-4978-9f2e-492904f94465\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-k595m"
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.883890 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-k595m"
Jan 21 09:59:44 crc kubenswrapper[5119]: I0121 09:59:44.971754 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-z67hs"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.007764 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-gngm4"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.022453 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.024325 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4wln"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.049147 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.054234 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w7cjs"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.059530 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.115661 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f58bd647d-mnl25"]
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.116258 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ee0294ff-f61f-492b-b738-fbbee8f757eb" containerName="controller-manager"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.116274 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0294ff-f61f-492b-b738-fbbee8f757eb" containerName="controller-manager"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.116286 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="07776dee-a157-4c69-ae94-c63a101a84f2" containerName="extract-utilities"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.116292 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="07776dee-a157-4c69-ae94-c63a101a84f2" containerName="extract-utilities"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.116302 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="49467157-6fc6-4f0b-b833-1b95a6068d7e" containerName="extract-utilities"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.116308 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="49467157-6fc6-4f0b-b833-1b95a6068d7e" containerName="extract-utilities"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.116321 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d3c70e39-bf38-42a7-b579-ed17a163a5b1" containerName="route-controller-manager"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.116326 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3c70e39-bf38-42a7-b579-ed17a163a5b1" containerName="route-controller-manager"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.116333 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f5a878c2-9a7b-4d34-a9ee-28bdd05d3151" containerName="marketplace-operator"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.116338 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5a878c2-9a7b-4d34-a9ee-28bdd05d3151" containerName="marketplace-operator"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.116353 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f5a878c2-9a7b-4d34-a9ee-28bdd05d3151" containerName="marketplace-operator"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.116359 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5a878c2-9a7b-4d34-a9ee-28bdd05d3151" containerName="marketplace-operator"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.116370 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="49467157-6fc6-4f0b-b833-1b95a6068d7e" containerName="registry-server"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.116375 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="49467157-6fc6-4f0b-b833-1b95a6068d7e" containerName="registry-server"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.116391 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="07776dee-a157-4c69-ae94-c63a101a84f2" containerName="extract-content"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.116397 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="07776dee-a157-4c69-ae94-c63a101a84f2" containerName="extract-content"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.116405 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="07776dee-a157-4c69-ae94-c63a101a84f2" containerName="registry-server"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.116410 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="07776dee-a157-4c69-ae94-c63a101a84f2" containerName="registry-server"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.116421 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="49467157-6fc6-4f0b-b833-1b95a6068d7e" containerName="extract-content"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.116427 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="49467157-6fc6-4f0b-b833-1b95a6068d7e" containerName="extract-content"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.116519 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="07776dee-a157-4c69-ae94-c63a101a84f2" containerName="registry-server"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.116530 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="f5a878c2-9a7b-4d34-a9ee-28bdd05d3151" containerName="marketplace-operator"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.116538 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="49467157-6fc6-4f0b-b833-1b95a6068d7e" containerName="registry-server"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.116545 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="ee0294ff-f61f-492b-b738-fbbee8f757eb" containerName="controller-manager"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.116554 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="d3c70e39-bf38-42a7-b579-ed17a163a5b1" containerName="route-controller-manager"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.119935 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9nmdb"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.128201 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f58bd647d-mnl25"]
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.128340 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f58bd647d-mnl25"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.141708 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9gxxh"
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.158700 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07776dee-a157-4c69-ae94-c63a101a84f2-utilities\") pod \"07776dee-a157-4c69-ae94-c63a101a84f2\" (UID: \"07776dee-a157-4c69-ae94-c63a101a84f2\") "
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.158735 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee0294ff-f61f-492b-b738-fbbee8f757eb-config\") pod \"ee0294ff-f61f-492b-b738-fbbee8f757eb\" (UID: \"ee0294ff-f61f-492b-b738-fbbee8f757eb\") "
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.158770 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f5a878c2-9a7b-4d34-a9ee-28bdd05d3151-marketplace-trusted-ca\") pod \"f5a878c2-9a7b-4d34-a9ee-28bdd05d3151\" (UID: \"f5a878c2-9a7b-4d34-a9ee-28bdd05d3151\") "
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.158793 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49467157-6fc6-4f0b-b833-1b95a6068d7e-catalog-content\") pod \"49467157-6fc6-4f0b-b833-1b95a6068d7e\" (UID: \"49467157-6fc6-4f0b-b833-1b95a6068d7e\") "
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.158826 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6783a1d3-549e-4077-9898-723d2984e451-utilities\") pod \"6783a1d3-549e-4077-9898-723d2984e451\" (UID: \"6783a1d3-549e-4077-9898-723d2984e451\") "
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.158852 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ee0294ff-f61f-492b-b738-fbbee8f757eb-tmp\") pod \"ee0294ff-f61f-492b-b738-fbbee8f757eb\" (UID: \"ee0294ff-f61f-492b-b738-fbbee8f757eb\") "
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.158871 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6783a1d3-549e-4077-9898-723d2984e451-catalog-content\") pod \"6783a1d3-549e-4077-9898-723d2984e451\" (UID: \"6783a1d3-549e-4077-9898-723d2984e451\") "
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.158888 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3c70e39-bf38-42a7-b579-ed17a163a5b1-serving-cert\") pod \"d3c70e39-bf38-42a7-b579-ed17a163a5b1\" (UID: \"d3c70e39-bf38-42a7-b579-ed17a163a5b1\") "
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.158903 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee0294ff-f61f-492b-b738-fbbee8f757eb-serving-cert\") pod \"ee0294ff-f61f-492b-b738-fbbee8f757eb\" (UID: \"ee0294ff-f61f-492b-b738-fbbee8f757eb\") "
Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.158921 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vswtq\"
(UniqueName: \"kubernetes.io/projected/49467157-6fc6-4f0b-b833-1b95a6068d7e-kube-api-access-vswtq\") pod \"49467157-6fc6-4f0b-b833-1b95a6068d7e\" (UID: \"49467157-6fc6-4f0b-b833-1b95a6068d7e\") " Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.158938 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljf6z\" (UniqueName: \"kubernetes.io/projected/9bccb111-fc78-420c-bb88-788974b0d7d5-kube-api-access-ljf6z\") pod \"9bccb111-fc78-420c-bb88-788974b0d7d5\" (UID: \"9bccb111-fc78-420c-bb88-788974b0d7d5\") " Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.158953 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6xww\" (UniqueName: \"kubernetes.io/projected/f5a878c2-9a7b-4d34-a9ee-28bdd05d3151-kube-api-access-p6xww\") pod \"f5a878c2-9a7b-4d34-a9ee-28bdd05d3151\" (UID: \"f5a878c2-9a7b-4d34-a9ee-28bdd05d3151\") " Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.158979 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clhbh\" (UniqueName: \"kubernetes.io/projected/ee0294ff-f61f-492b-b738-fbbee8f757eb-kube-api-access-clhbh\") pod \"ee0294ff-f61f-492b-b738-fbbee8f757eb\" (UID: \"ee0294ff-f61f-492b-b738-fbbee8f757eb\") " Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.159004 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3c70e39-bf38-42a7-b579-ed17a163a5b1-tmp\") pod \"d3c70e39-bf38-42a7-b579-ed17a163a5b1\" (UID: \"d3c70e39-bf38-42a7-b579-ed17a163a5b1\") " Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.159029 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f5a878c2-9a7b-4d34-a9ee-28bdd05d3151-marketplace-operator-metrics\") pod \"f5a878c2-9a7b-4d34-a9ee-28bdd05d3151\" (UID: 
\"f5a878c2-9a7b-4d34-a9ee-28bdd05d3151\") " Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.159045 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgxcw\" (UniqueName: \"kubernetes.io/projected/6783a1d3-549e-4077-9898-723d2984e451-kube-api-access-xgxcw\") pod \"6783a1d3-549e-4077-9898-723d2984e451\" (UID: \"6783a1d3-549e-4077-9898-723d2984e451\") " Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.159063 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvrkq\" (UniqueName: \"kubernetes.io/projected/d3c70e39-bf38-42a7-b579-ed17a163a5b1-kube-api-access-xvrkq\") pod \"d3c70e39-bf38-42a7-b579-ed17a163a5b1\" (UID: \"d3c70e39-bf38-42a7-b579-ed17a163a5b1\") " Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.159078 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3c70e39-bf38-42a7-b579-ed17a163a5b1-client-ca\") pod \"d3c70e39-bf38-42a7-b579-ed17a163a5b1\" (UID: \"d3c70e39-bf38-42a7-b579-ed17a163a5b1\") " Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.159096 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bccb111-fc78-420c-bb88-788974b0d7d5-utilities\") pod \"9bccb111-fc78-420c-bb88-788974b0d7d5\" (UID: \"9bccb111-fc78-420c-bb88-788974b0d7d5\") " Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.159114 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f5a878c2-9a7b-4d34-a9ee-28bdd05d3151-tmp\") pod \"f5a878c2-9a7b-4d34-a9ee-28bdd05d3151\" (UID: \"f5a878c2-9a7b-4d34-a9ee-28bdd05d3151\") " Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.159135 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/49467157-6fc6-4f0b-b833-1b95a6068d7e-utilities\") pod \"49467157-6fc6-4f0b-b833-1b95a6068d7e\" (UID: \"49467157-6fc6-4f0b-b833-1b95a6068d7e\") " Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.159152 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07776dee-a157-4c69-ae94-c63a101a84f2-catalog-content\") pod \"07776dee-a157-4c69-ae94-c63a101a84f2\" (UID: \"07776dee-a157-4c69-ae94-c63a101a84f2\") " Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.159186 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee0294ff-f61f-492b-b738-fbbee8f757eb-proxy-ca-bundles\") pod \"ee0294ff-f61f-492b-b738-fbbee8f757eb\" (UID: \"ee0294ff-f61f-492b-b738-fbbee8f757eb\") " Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.159209 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l248k\" (UniqueName: \"kubernetes.io/projected/07776dee-a157-4c69-ae94-c63a101a84f2-kube-api-access-l248k\") pod \"07776dee-a157-4c69-ae94-c63a101a84f2\" (UID: \"07776dee-a157-4c69-ae94-c63a101a84f2\") " Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.159227 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee0294ff-f61f-492b-b738-fbbee8f757eb-client-ca\") pod \"ee0294ff-f61f-492b-b738-fbbee8f757eb\" (UID: \"ee0294ff-f61f-492b-b738-fbbee8f757eb\") " Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.159241 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3c70e39-bf38-42a7-b579-ed17a163a5b1-config\") pod \"d3c70e39-bf38-42a7-b579-ed17a163a5b1\" (UID: \"d3c70e39-bf38-42a7-b579-ed17a163a5b1\") " Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.159259 
5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bccb111-fc78-420c-bb88-788974b0d7d5-catalog-content\") pod \"9bccb111-fc78-420c-bb88-788974b0d7d5\" (UID: \"9bccb111-fc78-420c-bb88-788974b0d7d5\") " Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.159337 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f09bea86-0cdf-4e61-94b7-7231e3aced57-config\") pod \"controller-manager-f58bd647d-mnl25\" (UID: \"f09bea86-0cdf-4e61-94b7-7231e3aced57\") " pod="openshift-controller-manager/controller-manager-f58bd647d-mnl25" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.159357 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f09bea86-0cdf-4e61-94b7-7231e3aced57-client-ca\") pod \"controller-manager-f58bd647d-mnl25\" (UID: \"f09bea86-0cdf-4e61-94b7-7231e3aced57\") " pod="openshift-controller-manager/controller-manager-f58bd647d-mnl25" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.159394 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f09bea86-0cdf-4e61-94b7-7231e3aced57-serving-cert\") pod \"controller-manager-f58bd647d-mnl25\" (UID: \"f09bea86-0cdf-4e61-94b7-7231e3aced57\") " pod="openshift-controller-manager/controller-manager-f58bd647d-mnl25" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.159429 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f09bea86-0cdf-4e61-94b7-7231e3aced57-proxy-ca-bundles\") pod \"controller-manager-f58bd647d-mnl25\" (UID: \"f09bea86-0cdf-4e61-94b7-7231e3aced57\") " 
pod="openshift-controller-manager/controller-manager-f58bd647d-mnl25" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.159480 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f09bea86-0cdf-4e61-94b7-7231e3aced57-tmp\") pod \"controller-manager-f58bd647d-mnl25\" (UID: \"f09bea86-0cdf-4e61-94b7-7231e3aced57\") " pod="openshift-controller-manager/controller-manager-f58bd647d-mnl25" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.159502 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47qhv\" (UniqueName: \"kubernetes.io/projected/f09bea86-0cdf-4e61-94b7-7231e3aced57-kube-api-access-47qhv\") pod \"controller-manager-f58bd647d-mnl25\" (UID: \"f09bea86-0cdf-4e61-94b7-7231e3aced57\") " pod="openshift-controller-manager/controller-manager-f58bd647d-mnl25" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.160521 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee0294ff-f61f-492b-b738-fbbee8f757eb-client-ca" (OuterVolumeSpecName: "client-ca") pod "ee0294ff-f61f-492b-b738-fbbee8f757eb" (UID: "ee0294ff-f61f-492b-b738-fbbee8f757eb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.160746 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07776dee-a157-4c69-ae94-c63a101a84f2-utilities" (OuterVolumeSpecName: "utilities") pod "07776dee-a157-4c69-ae94-c63a101a84f2" (UID: "07776dee-a157-4c69-ae94-c63a101a84f2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.161341 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3c70e39-bf38-42a7-b579-ed17a163a5b1-config" (OuterVolumeSpecName: "config") pod "d3c70e39-bf38-42a7-b579-ed17a163a5b1" (UID: "d3c70e39-bf38-42a7-b579-ed17a163a5b1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.161693 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee0294ff-f61f-492b-b738-fbbee8f757eb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ee0294ff-f61f-492b-b738-fbbee8f757eb" (UID: "ee0294ff-f61f-492b-b738-fbbee8f757eb"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.164017 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bccb111-fc78-420c-bb88-788974b0d7d5-utilities" (OuterVolumeSpecName: "utilities") pod "9bccb111-fc78-420c-bb88-788974b0d7d5" (UID: "9bccb111-fc78-420c-bb88-788974b0d7d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.166671 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3c70e39-bf38-42a7-b579-ed17a163a5b1-client-ca" (OuterVolumeSpecName: "client-ca") pod "d3c70e39-bf38-42a7-b579-ed17a163a5b1" (UID: "d3c70e39-bf38-42a7-b579-ed17a163a5b1"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.169455 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49467157-6fc6-4f0b-b833-1b95a6068d7e-utilities" (OuterVolumeSpecName: "utilities") pod "49467157-6fc6-4f0b-b833-1b95a6068d7e" (UID: "49467157-6fc6-4f0b-b833-1b95a6068d7e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.174480 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee0294ff-f61f-492b-b738-fbbee8f757eb-config" (OuterVolumeSpecName: "config") pod "ee0294ff-f61f-492b-b738-fbbee8f757eb" (UID: "ee0294ff-f61f-492b-b738-fbbee8f757eb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.179028 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3c70e39-bf38-42a7-b579-ed17a163a5b1-tmp" (OuterVolumeSpecName: "tmp") pod "d3c70e39-bf38-42a7-b579-ed17a163a5b1" (UID: "d3c70e39-bf38-42a7-b579-ed17a163a5b1"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.179797 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee0294ff-f61f-492b-b738-fbbee8f757eb-tmp" (OuterVolumeSpecName: "tmp") pod "ee0294ff-f61f-492b-b738-fbbee8f757eb" (UID: "ee0294ff-f61f-492b-b738-fbbee8f757eb"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.180262 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5a878c2-9a7b-4d34-a9ee-28bdd05d3151-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "f5a878c2-9a7b-4d34-a9ee-28bdd05d3151" (UID: "f5a878c2-9a7b-4d34-a9ee-28bdd05d3151"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.180310 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc"] Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.180810 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9bccb111-fc78-420c-bb88-788974b0d7d5" containerName="registry-server" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.180821 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bccb111-fc78-420c-bb88-788974b0d7d5" containerName="registry-server" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.180838 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6783a1d3-549e-4077-9898-723d2984e451" containerName="extract-content" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.180844 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="6783a1d3-549e-4077-9898-723d2984e451" containerName="extract-content" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.180855 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9bccb111-fc78-420c-bb88-788974b0d7d5" containerName="extract-content" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.180861 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bccb111-fc78-420c-bb88-788974b0d7d5" containerName="extract-content" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 
09:59:45.180871 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9bccb111-fc78-420c-bb88-788974b0d7d5" containerName="extract-utilities" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.180876 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bccb111-fc78-420c-bb88-788974b0d7d5" containerName="extract-utilities" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.180889 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6783a1d3-549e-4077-9898-723d2984e451" containerName="extract-utilities" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.180894 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="6783a1d3-549e-4077-9898-723d2984e451" containerName="extract-utilities" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.180902 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6783a1d3-549e-4077-9898-723d2984e451" containerName="registry-server" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.180907 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="6783a1d3-549e-4077-9898-723d2984e451" containerName="registry-server" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.180986 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="6783a1d3-549e-4077-9898-723d2984e451" containerName="registry-server" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.180997 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="9bccb111-fc78-420c-bb88-788974b0d7d5" containerName="registry-server" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.181015 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="f5a878c2-9a7b-4d34-a9ee-28bdd05d3151" containerName="marketplace-operator" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.181772 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/6783a1d3-549e-4077-9898-723d2984e451-utilities" (OuterVolumeSpecName: "utilities") pod "6783a1d3-549e-4077-9898-723d2984e451" (UID: "6783a1d3-549e-4077-9898-723d2984e451"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.202086 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.212352 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3c70e39-bf38-42a7-b579-ed17a163a5b1-kube-api-access-xvrkq" (OuterVolumeSpecName: "kube-api-access-xvrkq") pod "d3c70e39-bf38-42a7-b579-ed17a163a5b1" (UID: "d3c70e39-bf38-42a7-b579-ed17a163a5b1"). InnerVolumeSpecName "kube-api-access-xvrkq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.224464 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07776dee-a157-4c69-ae94-c63a101a84f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07776dee-a157-4c69-ae94-c63a101a84f2" (UID: "07776dee-a157-4c69-ae94-c63a101a84f2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.226449 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc"] Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.230573 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee0294ff-f61f-492b-b738-fbbee8f757eb-kube-api-access-clhbh" (OuterVolumeSpecName: "kube-api-access-clhbh") pod "ee0294ff-f61f-492b-b738-fbbee8f757eb" (UID: "ee0294ff-f61f-492b-b738-fbbee8f757eb"). 
InnerVolumeSpecName "kube-api-access-clhbh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.232715 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6783a1d3-549e-4077-9898-723d2984e451-kube-api-access-xgxcw" (OuterVolumeSpecName: "kube-api-access-xgxcw") pod "6783a1d3-549e-4077-9898-723d2984e451" (UID: "6783a1d3-549e-4077-9898-723d2984e451"). InnerVolumeSpecName "kube-api-access-xgxcw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.239435 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07776dee-a157-4c69-ae94-c63a101a84f2-kube-api-access-l248k" (OuterVolumeSpecName: "kube-api-access-l248k") pod "07776dee-a157-4c69-ae94-c63a101a84f2" (UID: "07776dee-a157-4c69-ae94-c63a101a84f2"). InnerVolumeSpecName "kube-api-access-l248k". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.239476 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49467157-6fc6-4f0b-b833-1b95a6068d7e-kube-api-access-vswtq" (OuterVolumeSpecName: "kube-api-access-vswtq") pod "49467157-6fc6-4f0b-b833-1b95a6068d7e" (UID: "49467157-6fc6-4f0b-b833-1b95a6068d7e"). InnerVolumeSpecName "kube-api-access-vswtq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.239789 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bccb111-fc78-420c-bb88-788974b0d7d5-kube-api-access-ljf6z" (OuterVolumeSpecName: "kube-api-access-ljf6z") pod "9bccb111-fc78-420c-bb88-788974b0d7d5" (UID: "9bccb111-fc78-420c-bb88-788974b0d7d5"). InnerVolumeSpecName "kube-api-access-ljf6z". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.244469 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49467157-6fc6-4f0b-b833-1b95a6068d7e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49467157-6fc6-4f0b-b833-1b95a6068d7e" (UID: "49467157-6fc6-4f0b-b833-1b95a6068d7e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.245230 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5a878c2-9a7b-4d34-a9ee-28bdd05d3151-kube-api-access-p6xww" (OuterVolumeSpecName: "kube-api-access-p6xww") pod "f5a878c2-9a7b-4d34-a9ee-28bdd05d3151" (UID: "f5a878c2-9a7b-4d34-a9ee-28bdd05d3151"). InnerVolumeSpecName "kube-api-access-p6xww". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.247751 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3c70e39-bf38-42a7-b579-ed17a163a5b1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d3c70e39-bf38-42a7-b579-ed17a163a5b1" (UID: "d3c70e39-bf38-42a7-b579-ed17a163a5b1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.247839 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee0294ff-f61f-492b-b738-fbbee8f757eb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ee0294ff-f61f-492b-b738-fbbee8f757eb" (UID: "ee0294ff-f61f-492b-b738-fbbee8f757eb"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.251579 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5a878c2-9a7b-4d34-a9ee-28bdd05d3151-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "f5a878c2-9a7b-4d34-a9ee-28bdd05d3151" (UID: "f5a878c2-9a7b-4d34-a9ee-28bdd05d3151"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.255665 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5a878c2-9a7b-4d34-a9ee-28bdd05d3151-tmp" (OuterVolumeSpecName: "tmp") pod "f5a878c2-9a7b-4d34-a9ee-28bdd05d3151" (UID: "f5a878c2-9a7b-4d34-a9ee-28bdd05d3151"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.260548 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-47qhv\" (UniqueName: \"kubernetes.io/projected/f09bea86-0cdf-4e61-94b7-7231e3aced57-kube-api-access-47qhv\") pod \"controller-manager-f58bd647d-mnl25\" (UID: \"f09bea86-0cdf-4e61-94b7-7231e3aced57\") " pod="openshift-controller-manager/controller-manager-f58bd647d-mnl25" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.260642 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f09bea86-0cdf-4e61-94b7-7231e3aced57-config\") pod \"controller-manager-f58bd647d-mnl25\" (UID: \"f09bea86-0cdf-4e61-94b7-7231e3aced57\") " pod="openshift-controller-manager/controller-manager-f58bd647d-mnl25" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.260666 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/f09bea86-0cdf-4e61-94b7-7231e3aced57-client-ca\") pod \"controller-manager-f58bd647d-mnl25\" (UID: \"f09bea86-0cdf-4e61-94b7-7231e3aced57\") " pod="openshift-controller-manager/controller-manager-f58bd647d-mnl25" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.260695 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f09bea86-0cdf-4e61-94b7-7231e3aced57-serving-cert\") pod \"controller-manager-f58bd647d-mnl25\" (UID: \"f09bea86-0cdf-4e61-94b7-7231e3aced57\") " pod="openshift-controller-manager/controller-manager-f58bd647d-mnl25" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.260721 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f09bea86-0cdf-4e61-94b7-7231e3aced57-proxy-ca-bundles\") pod \"controller-manager-f58bd647d-mnl25\" (UID: \"f09bea86-0cdf-4e61-94b7-7231e3aced57\") " pod="openshift-controller-manager/controller-manager-f58bd647d-mnl25" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.260763 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f09bea86-0cdf-4e61-94b7-7231e3aced57-tmp\") pod \"controller-manager-f58bd647d-mnl25\" (UID: \"f09bea86-0cdf-4e61-94b7-7231e3aced57\") " pod="openshift-controller-manager/controller-manager-f58bd647d-mnl25" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.260801 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07776dee-a157-4c69-ae94-c63a101a84f2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.260811 5119 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee0294ff-f61f-492b-b738-fbbee8f757eb-proxy-ca-bundles\") on node \"crc\" 
DevicePath \"\"" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.260820 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l248k\" (UniqueName: \"kubernetes.io/projected/07776dee-a157-4c69-ae94-c63a101a84f2-kube-api-access-l248k\") on node \"crc\" DevicePath \"\"" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.260832 5119 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee0294ff-f61f-492b-b738-fbbee8f757eb-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.260840 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3c70e39-bf38-42a7-b579-ed17a163a5b1-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.260849 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07776dee-a157-4c69-ae94-c63a101a84f2-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.260858 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee0294ff-f61f-492b-b738-fbbee8f757eb-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.260866 5119 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f5a878c2-9a7b-4d34-a9ee-28bdd05d3151-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.260874 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49467157-6fc6-4f0b-b833-1b95a6068d7e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.260882 5119 reconciler_common.go:299] "Volume detached for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6783a1d3-549e-4077-9898-723d2984e451-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.260891 5119 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ee0294ff-f61f-492b-b738-fbbee8f757eb-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.260898 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3c70e39-bf38-42a7-b579-ed17a163a5b1-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.260907 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee0294ff-f61f-492b-b738-fbbee8f757eb-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.260916 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vswtq\" (UniqueName: \"kubernetes.io/projected/49467157-6fc6-4f0b-b833-1b95a6068d7e-kube-api-access-vswtq\") on node \"crc\" DevicePath \"\"" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.260924 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ljf6z\" (UniqueName: \"kubernetes.io/projected/9bccb111-fc78-420c-bb88-788974b0d7d5-kube-api-access-ljf6z\") on node \"crc\" DevicePath \"\"" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.260933 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p6xww\" (UniqueName: \"kubernetes.io/projected/f5a878c2-9a7b-4d34-a9ee-28bdd05d3151-kube-api-access-p6xww\") on node \"crc\" DevicePath \"\"" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.260942 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-clhbh\" (UniqueName: \"kubernetes.io/projected/ee0294ff-f61f-492b-b738-fbbee8f757eb-kube-api-access-clhbh\") 
on node \"crc\" DevicePath \"\"" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.260950 5119 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3c70e39-bf38-42a7-b579-ed17a163a5b1-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.260958 5119 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f5a878c2-9a7b-4d34-a9ee-28bdd05d3151-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.260967 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xgxcw\" (UniqueName: \"kubernetes.io/projected/6783a1d3-549e-4077-9898-723d2984e451-kube-api-access-xgxcw\") on node \"crc\" DevicePath \"\"" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.260977 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xvrkq\" (UniqueName: \"kubernetes.io/projected/d3c70e39-bf38-42a7-b579-ed17a163a5b1-kube-api-access-xvrkq\") on node \"crc\" DevicePath \"\"" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.260984 5119 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3c70e39-bf38-42a7-b579-ed17a163a5b1-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.260994 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bccb111-fc78-420c-bb88-788974b0d7d5-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.261001 5119 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f5a878c2-9a7b-4d34-a9ee-28bdd05d3151-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.261009 5119 reconciler_common.go:299] "Volume 
detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49467157-6fc6-4f0b-b833-1b95a6068d7e-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.261469 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f09bea86-0cdf-4e61-94b7-7231e3aced57-tmp\") pod \"controller-manager-f58bd647d-mnl25\" (UID: \"f09bea86-0cdf-4e61-94b7-7231e3aced57\") " pod="openshift-controller-manager/controller-manager-f58bd647d-mnl25" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.263532 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f09bea86-0cdf-4e61-94b7-7231e3aced57-client-ca\") pod \"controller-manager-f58bd647d-mnl25\" (UID: \"f09bea86-0cdf-4e61-94b7-7231e3aced57\") " pod="openshift-controller-manager/controller-manager-f58bd647d-mnl25" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.263893 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f09bea86-0cdf-4e61-94b7-7231e3aced57-proxy-ca-bundles\") pod \"controller-manager-f58bd647d-mnl25\" (UID: \"f09bea86-0cdf-4e61-94b7-7231e3aced57\") " pod="openshift-controller-manager/controller-manager-f58bd647d-mnl25" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.266023 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f09bea86-0cdf-4e61-94b7-7231e3aced57-serving-cert\") pod \"controller-manager-f58bd647d-mnl25\" (UID: \"f09bea86-0cdf-4e61-94b7-7231e3aced57\") " pod="openshift-controller-manager/controller-manager-f58bd647d-mnl25" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.272955 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f09bea86-0cdf-4e61-94b7-7231e3aced57-config\") pod \"controller-manager-f58bd647d-mnl25\" (UID: \"f09bea86-0cdf-4e61-94b7-7231e3aced57\") " pod="openshift-controller-manager/controller-manager-f58bd647d-mnl25" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.277767 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-47qhv\" (UniqueName: \"kubernetes.io/projected/f09bea86-0cdf-4e61-94b7-7231e3aced57-kube-api-access-47qhv\") pod \"controller-manager-f58bd647d-mnl25\" (UID: \"f09bea86-0cdf-4e61-94b7-7231e3aced57\") " pod="openshift-controller-manager/controller-manager-f58bd647d-mnl25" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.310901 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bccb111-fc78-420c-bb88-788974b0d7d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9bccb111-fc78-420c-bb88-788974b0d7d5" (UID: "9bccb111-fc78-420c-bb88-788974b0d7d5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.351794 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6783a1d3-549e-4077-9898-723d2984e451-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6783a1d3-549e-4077-9898-723d2984e451" (UID: "6783a1d3-549e-4077-9898-723d2984e451"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.362901 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/afe35b85-6f80-41f6-8ca1-62fe862e7eee-tmp\") pod \"route-controller-manager-5c8f4bdfb-p2wqc\" (UID: \"afe35b85-6f80-41f6-8ca1-62fe862e7eee\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.362961 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afe35b85-6f80-41f6-8ca1-62fe862e7eee-client-ca\") pod \"route-controller-manager-5c8f4bdfb-p2wqc\" (UID: \"afe35b85-6f80-41f6-8ca1-62fe862e7eee\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.362998 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afe35b85-6f80-41f6-8ca1-62fe862e7eee-serving-cert\") pod \"route-controller-manager-5c8f4bdfb-p2wqc\" (UID: \"afe35b85-6f80-41f6-8ca1-62fe862e7eee\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.363069 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6f9h\" (UniqueName: \"kubernetes.io/projected/afe35b85-6f80-41f6-8ca1-62fe862e7eee-kube-api-access-r6f9h\") pod \"route-controller-manager-5c8f4bdfb-p2wqc\" (UID: \"afe35b85-6f80-41f6-8ca1-62fe862e7eee\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.363199 5119 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afe35b85-6f80-41f6-8ca1-62fe862e7eee-config\") pod \"route-controller-manager-5c8f4bdfb-p2wqc\" (UID: \"afe35b85-6f80-41f6-8ca1-62fe862e7eee\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.363315 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bccb111-fc78-420c-bb88-788974b0d7d5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.363341 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6783a1d3-549e-4077-9898-723d2984e451-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.464543 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afe35b85-6f80-41f6-8ca1-62fe862e7eee-serving-cert\") pod \"route-controller-manager-5c8f4bdfb-p2wqc\" (UID: \"afe35b85-6f80-41f6-8ca1-62fe862e7eee\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.464669 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r6f9h\" (UniqueName: \"kubernetes.io/projected/afe35b85-6f80-41f6-8ca1-62fe862e7eee-kube-api-access-r6f9h\") pod \"route-controller-manager-5c8f4bdfb-p2wqc\" (UID: \"afe35b85-6f80-41f6-8ca1-62fe862e7eee\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.464718 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/afe35b85-6f80-41f6-8ca1-62fe862e7eee-config\") pod \"route-controller-manager-5c8f4bdfb-p2wqc\" (UID: \"afe35b85-6f80-41f6-8ca1-62fe862e7eee\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.464748 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/afe35b85-6f80-41f6-8ca1-62fe862e7eee-tmp\") pod \"route-controller-manager-5c8f4bdfb-p2wqc\" (UID: \"afe35b85-6f80-41f6-8ca1-62fe862e7eee\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.464775 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afe35b85-6f80-41f6-8ca1-62fe862e7eee-client-ca\") pod \"route-controller-manager-5c8f4bdfb-p2wqc\" (UID: \"afe35b85-6f80-41f6-8ca1-62fe862e7eee\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.465529 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afe35b85-6f80-41f6-8ca1-62fe862e7eee-client-ca\") pod \"route-controller-manager-5c8f4bdfb-p2wqc\" (UID: \"afe35b85-6f80-41f6-8ca1-62fe862e7eee\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.466123 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/afe35b85-6f80-41f6-8ca1-62fe862e7eee-tmp\") pod \"route-controller-manager-5c8f4bdfb-p2wqc\" (UID: \"afe35b85-6f80-41f6-8ca1-62fe862e7eee\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.466725 5119 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afe35b85-6f80-41f6-8ca1-62fe862e7eee-config\") pod \"route-controller-manager-5c8f4bdfb-p2wqc\" (UID: \"afe35b85-6f80-41f6-8ca1-62fe862e7eee\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.468550 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afe35b85-6f80-41f6-8ca1-62fe862e7eee-serving-cert\") pod \"route-controller-manager-5c8f4bdfb-p2wqc\" (UID: \"afe35b85-6f80-41f6-8ca1-62fe862e7eee\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.479543 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6f9h\" (UniqueName: \"kubernetes.io/projected/afe35b85-6f80-41f6-8ca1-62fe862e7eee-kube-api-access-r6f9h\") pod \"route-controller-manager-5c8f4bdfb-p2wqc\" (UID: \"afe35b85-6f80-41f6-8ca1-62fe862e7eee\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.514273 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f58bd647d-mnl25" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.569676 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.614842 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-k595m"] Jan 21 09:59:45 crc kubenswrapper[5119]: W0121 09:59:45.627733 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb28083dd_7140_4978_9f2e_492904f94465.slice/crio-69ba508d95b5e5e635414e251639bdc29e6a5c01ec15fb12f48e7116a7973567 WatchSource:0}: Error finding container 69ba508d95b5e5e635414e251639bdc29e6a5c01ec15fb12f48e7116a7973567: Status 404 returned error can't find the container with id 69ba508d95b5e5e635414e251639bdc29e6a5c01ec15fb12f48e7116a7973567 Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.630310 5119 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.747191 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4wln" event={"ID":"07776dee-a157-4c69-ae94-c63a101a84f2","Type":"ContainerDied","Data":"31f2c489a389f57e68b8247650280e6151340a6054fa0009f288e360a9937958"} Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.747242 5119 scope.go:117] "RemoveContainer" containerID="4a490e997495f494203b80cf802d7aafd4cf0eded86c24f57ca9373e175c06e9" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.747351 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4wln" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.760533 5119 generic.go:358] "Generic (PLEG): container finished" podID="ee0294ff-f61f-492b-b738-fbbee8f757eb" containerID="49a6bf6ae56dfc316394a3f87c6c306e64599a06b80212271663b64264194b7a" exitCode=0 Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.760812 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-gngm4" event={"ID":"ee0294ff-f61f-492b-b738-fbbee8f757eb","Type":"ContainerDied","Data":"49a6bf6ae56dfc316394a3f87c6c306e64599a06b80212271663b64264194b7a"} Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.760852 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-gngm4" event={"ID":"ee0294ff-f61f-492b-b738-fbbee8f757eb","Type":"ContainerDied","Data":"b9bb6bb6502b2c156756fe1f28b9594041f889a6f90a1de6d8de4c4f64050de3"} Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.760958 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-gngm4" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.769027 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gxxh" event={"ID":"9bccb111-fc78-420c-bb88-788974b0d7d5","Type":"ContainerDied","Data":"a6a15d5a0fb4bc6d9aeba8ffba44cd8888b405ae75e1655106a56e71116c9e11"} Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.772856 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9gxxh" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.775684 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f58bd647d-mnl25"] Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.779174 5119 generic.go:358] "Generic (PLEG): container finished" podID="d3c70e39-bf38-42a7-b579-ed17a163a5b1" containerID="a7fba0330a98646af203f10d301d7c5346e6f74a1057cf333ae4391c410c415b" exitCode=0 Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.779408 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.779718 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98" event={"ID":"d3c70e39-bf38-42a7-b579-ed17a163a5b1","Type":"ContainerDied","Data":"a7fba0330a98646af203f10d301d7c5346e6f74a1057cf333ae4391c410c415b"} Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.780569 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98" event={"ID":"d3c70e39-bf38-42a7-b579-ed17a163a5b1","Type":"ContainerDied","Data":"0b01e0d2386874f8e8037c51db2faf56cb3cf6f009eb38c5398ae4339dd6f1f6"} Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.782722 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-z67hs" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.782728 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-z67hs" event={"ID":"f5a878c2-9a7b-4d34-a9ee-28bdd05d3151","Type":"ContainerDied","Data":"ede4d4bcb9213c78d956746bb7c74ce07144a4b58a24777380e7bb1478e64fe0"} Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.788921 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w7cjs" event={"ID":"49467157-6fc6-4f0b-b833-1b95a6068d7e","Type":"ContainerDied","Data":"ead29c4becaf9684ee661545200b94f74f075e9c4115d493b670e31d942033a3"} Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.789822 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w7cjs" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.791427 5119 scope.go:117] "RemoveContainer" containerID="60b8efe091becd04afa18c2bfb6b75ed99d5ccf66070f83f0e5824e86bc2a07a" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.793754 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nmdb" event={"ID":"6783a1d3-549e-4077-9898-723d2984e451","Type":"ContainerDied","Data":"132d1b5afc13e23bff80fbcbeab9b77b73db19f2778c5360db8bc8070b1ff409"} Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.794025 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9nmdb" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.797290 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-k595m" event={"ID":"b28083dd-7140-4978-9f2e-492904f94465","Type":"ContainerStarted","Data":"69ba508d95b5e5e635414e251639bdc29e6a5c01ec15fb12f48e7116a7973567"} Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.823825 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4wln"] Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.829138 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4wln"] Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.830573 5119 scope.go:117] "RemoveContainer" containerID="ed62d1924339add48c5dbe2f41b95894222cc02affa7a2b4a7102901a8671a9a" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.864353 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-gngm4"] Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.867989 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-gngm4"] Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.868888 5119 scope.go:117] "RemoveContainer" containerID="49a6bf6ae56dfc316394a3f87c6c306e64599a06b80212271663b64264194b7a" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.888650 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-z67hs"] Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.888705 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-z67hs"] Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.905271 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98"] Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.910336 5119 scope.go:117] "RemoveContainer" containerID="49a6bf6ae56dfc316394a3f87c6c306e64599a06b80212271663b64264194b7a" Jan 21 09:59:45 crc kubenswrapper[5119]: E0121 09:59:45.910754 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49a6bf6ae56dfc316394a3f87c6c306e64599a06b80212271663b64264194b7a\": container with ID starting with 49a6bf6ae56dfc316394a3f87c6c306e64599a06b80212271663b64264194b7a not found: ID does not exist" containerID="49a6bf6ae56dfc316394a3f87c6c306e64599a06b80212271663b64264194b7a" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.910777 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49a6bf6ae56dfc316394a3f87c6c306e64599a06b80212271663b64264194b7a"} err="failed to get container status \"49a6bf6ae56dfc316394a3f87c6c306e64599a06b80212271663b64264194b7a\": rpc error: code = NotFound desc = could not find container \"49a6bf6ae56dfc316394a3f87c6c306e64599a06b80212271663b64264194b7a\": container with ID starting with 49a6bf6ae56dfc316394a3f87c6c306e64599a06b80212271663b64264194b7a not found: ID does not exist" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.910795 5119 scope.go:117] "RemoveContainer" containerID="b3074768bedee2cfb8e3b551c1bb3fb78486f073ff0a66549be988a2d905b70a" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.912413 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-vgx98"] Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.918108 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9gxxh"] Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.921423 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/community-operators-9gxxh"] Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.927953 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9nmdb"] Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.940484 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9nmdb"] Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.944279 5119 scope.go:117] "RemoveContainer" containerID="85f037ab7a03b908500fa78f0dc848eaa29467d2f55de9b2282c96b68f41b908" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.944612 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w7cjs"] Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.947279 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w7cjs"] Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.967137 5119 scope.go:117] "RemoveContainer" containerID="a2b760901412f181c1a3dc481ff1b0ff91507e3ff8ad6d53e6a932628b6bf771" Jan 21 09:59:45 crc kubenswrapper[5119]: I0121 09:59:45.986981 5119 scope.go:117] "RemoveContainer" containerID="a7fba0330a98646af203f10d301d7c5346e6f74a1057cf333ae4391c410c415b" Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.013479 5119 scope.go:117] "RemoveContainer" containerID="a7fba0330a98646af203f10d301d7c5346e6f74a1057cf333ae4391c410c415b" Jan 21 09:59:46 crc kubenswrapper[5119]: E0121 09:59:46.014012 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7fba0330a98646af203f10d301d7c5346e6f74a1057cf333ae4391c410c415b\": container with ID starting with a7fba0330a98646af203f10d301d7c5346e6f74a1057cf333ae4391c410c415b not found: ID does not exist" containerID="a7fba0330a98646af203f10d301d7c5346e6f74a1057cf333ae4391c410c415b" Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.014059 5119 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7fba0330a98646af203f10d301d7c5346e6f74a1057cf333ae4391c410c415b"} err="failed to get container status \"a7fba0330a98646af203f10d301d7c5346e6f74a1057cf333ae4391c410c415b\": rpc error: code = NotFound desc = could not find container \"a7fba0330a98646af203f10d301d7c5346e6f74a1057cf333ae4391c410c415b\": container with ID starting with a7fba0330a98646af203f10d301d7c5346e6f74a1057cf333ae4391c410c415b not found: ID does not exist" Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.014094 5119 scope.go:117] "RemoveContainer" containerID="042dad7afbbc3c02318f0afc2aeada39b2c54f93a5637fab14247ccb054155e9" Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.037025 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc"] Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.044852 5119 scope.go:117] "RemoveContainer" containerID="0a0f44308b64040e047bc42fe33b01e052a666bbfb72e0335010ac1b947ae5bc" Jan 21 09:59:46 crc kubenswrapper[5119]: W0121 09:59:46.048117 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafe35b85_6f80_41f6_8ca1_62fe862e7eee.slice/crio-97daa0d3def6cc3a91013cbd48d8967372f9ed84968f1e630d20e06bc2167e82 WatchSource:0}: Error finding container 97daa0d3def6cc3a91013cbd48d8967372f9ed84968f1e630d20e06bc2167e82: Status 404 returned error can't find the container with id 97daa0d3def6cc3a91013cbd48d8967372f9ed84968f1e630d20e06bc2167e82 Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.068258 5119 scope.go:117] "RemoveContainer" containerID="d11d9a2abf3ee6a1d57e2bc700370b8d9789db08d1b83edd743f862fd93867bb" Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.084539 5119 scope.go:117] "RemoveContainer" containerID="adc8977be75dc7f29782caedded6c37a9bf902f0ee05953fd79482207ca167a5" Jan 21 09:59:46 
crc kubenswrapper[5119]: I0121 09:59:46.100217 5119 scope.go:117] "RemoveContainer" containerID="4f713b6d31a4cc4ab01666fdccc55d242f87aef51b86b5be3206bdfcc800b9e2"
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.119088 5119 scope.go:117] "RemoveContainer" containerID="f0bca505eb9d5f3462a3843d1bb088cde07d143b82a1065cc24822eeaba09d34"
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.141530 5119 scope.go:117] "RemoveContainer" containerID="5a5ebe736f916df53e36b4219a57af97cc83e9c8f259135cd06e97d45b428d30"
Jan 21 09:59:46 crc kubenswrapper[5119]: E0121 09:59:46.367544 5119 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee0294ff_f61f_492b_b738_fbbee8f757eb.slice/crio-49a6bf6ae56dfc316394a3f87c6c306e64599a06b80212271663b64264194b7a.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.420390 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-695sn"]
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.424429 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-695sn"
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.426198 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.441442 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-695sn"]
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.476298 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f41c580-660a-421c-8be8-7ec588566fe5-utilities\") pod \"redhat-marketplace-695sn\" (UID: \"0f41c580-660a-421c-8be8-7ec588566fe5\") " pod="openshift-marketplace/redhat-marketplace-695sn"
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.476372 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tzc9\" (UniqueName: \"kubernetes.io/projected/0f41c580-660a-421c-8be8-7ec588566fe5-kube-api-access-7tzc9\") pod \"redhat-marketplace-695sn\" (UID: \"0f41c580-660a-421c-8be8-7ec588566fe5\") " pod="openshift-marketplace/redhat-marketplace-695sn"
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.476409 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f41c580-660a-421c-8be8-7ec588566fe5-catalog-content\") pod \"redhat-marketplace-695sn\" (UID: \"0f41c580-660a-421c-8be8-7ec588566fe5\") " pod="openshift-marketplace/redhat-marketplace-695sn"
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.576851 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f41c580-660a-421c-8be8-7ec588566fe5-utilities\") pod \"redhat-marketplace-695sn\" (UID: \"0f41c580-660a-421c-8be8-7ec588566fe5\") " pod="openshift-marketplace/redhat-marketplace-695sn"
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.576926 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7tzc9\" (UniqueName: \"kubernetes.io/projected/0f41c580-660a-421c-8be8-7ec588566fe5-kube-api-access-7tzc9\") pod \"redhat-marketplace-695sn\" (UID: \"0f41c580-660a-421c-8be8-7ec588566fe5\") " pod="openshift-marketplace/redhat-marketplace-695sn"
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.576964 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f41c580-660a-421c-8be8-7ec588566fe5-catalog-content\") pod \"redhat-marketplace-695sn\" (UID: \"0f41c580-660a-421c-8be8-7ec588566fe5\") " pod="openshift-marketplace/redhat-marketplace-695sn"
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.577448 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f41c580-660a-421c-8be8-7ec588566fe5-utilities\") pod \"redhat-marketplace-695sn\" (UID: \"0f41c580-660a-421c-8be8-7ec588566fe5\") " pod="openshift-marketplace/redhat-marketplace-695sn"
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.577740 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f41c580-660a-421c-8be8-7ec588566fe5-catalog-content\") pod \"redhat-marketplace-695sn\" (UID: \"0f41c580-660a-421c-8be8-7ec588566fe5\") " pod="openshift-marketplace/redhat-marketplace-695sn"
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.595560 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tzc9\" (UniqueName: \"kubernetes.io/projected/0f41c580-660a-421c-8be8-7ec588566fe5-kube-api-access-7tzc9\") pod \"redhat-marketplace-695sn\" (UID: \"0f41c580-660a-421c-8be8-7ec588566fe5\") " pod="openshift-marketplace/redhat-marketplace-695sn"
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.598366 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07776dee-a157-4c69-ae94-c63a101a84f2" path="/var/lib/kubelet/pods/07776dee-a157-4c69-ae94-c63a101a84f2/volumes"
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.599823 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49467157-6fc6-4f0b-b833-1b95a6068d7e" path="/var/lib/kubelet/pods/49467157-6fc6-4f0b-b833-1b95a6068d7e/volumes"
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.600592 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6783a1d3-549e-4077-9898-723d2984e451" path="/var/lib/kubelet/pods/6783a1d3-549e-4077-9898-723d2984e451/volumes"
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.602196 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bccb111-fc78-420c-bb88-788974b0d7d5" path="/var/lib/kubelet/pods/9bccb111-fc78-420c-bb88-788974b0d7d5/volumes"
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.603408 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3c70e39-bf38-42a7-b579-ed17a163a5b1" path="/var/lib/kubelet/pods/d3c70e39-bf38-42a7-b579-ed17a163a5b1/volumes"
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.606792 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee0294ff-f61f-492b-b738-fbbee8f757eb" path="/var/lib/kubelet/pods/ee0294ff-f61f-492b-b738-fbbee8f757eb/volumes"
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.607877 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5a878c2-9a7b-4d34-a9ee-28bdd05d3151" path="/var/lib/kubelet/pods/f5a878c2-9a7b-4d34-a9ee-28bdd05d3151/volumes"
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.742277 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-695sn"
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.805331 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f58bd647d-mnl25" event={"ID":"f09bea86-0cdf-4e61-94b7-7231e3aced57","Type":"ContainerStarted","Data":"5e60acdff6e950eef430c8a603424caba774cf4754de60768e480b4347d595e0"}
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.805375 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f58bd647d-mnl25" event={"ID":"f09bea86-0cdf-4e61-94b7-7231e3aced57","Type":"ContainerStarted","Data":"c6675a1efaf76fc898d03d43a3537c97d079ca3a7341e69f4243e4c213a563f5"}
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.806783 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-f58bd647d-mnl25"
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.812511 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc" event={"ID":"afe35b85-6f80-41f6-8ca1-62fe862e7eee","Type":"ContainerStarted","Data":"2ed03dfb5bde5bb6b7ad11384a57bd105822d0fcfa316029c5be245dbdb1c2ca"}
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.812547 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc" event={"ID":"afe35b85-6f80-41f6-8ca1-62fe862e7eee","Type":"ContainerStarted","Data":"97daa0d3def6cc3a91013cbd48d8967372f9ed84968f1e630d20e06bc2167e82"}
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.813373 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc"
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.813594 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f58bd647d-mnl25"
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.817212 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-k595m" event={"ID":"b28083dd-7140-4978-9f2e-492904f94465","Type":"ContainerStarted","Data":"ddf3b074fcb62e22af131206364ebe412c91742a0e7458bbc57ad88f365c6054"}
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.817756 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-k595m"
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.820713 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc"
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.821989 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f58bd647d-mnl25" podStartSLOduration=2.821970866 podStartE2EDuration="2.821970866s" podCreationTimestamp="2026-01-21 09:59:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:59:46.820392444 +0000 UTC m=+302.488484122" watchObservedRunningTime="2026-01-21 09:59:46.821970866 +0000 UTC m=+302.490062534"
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.829393 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-k595m"
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.841775 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-k595m" podStartSLOduration=2.8417572570000003 podStartE2EDuration="2.841757257s" podCreationTimestamp="2026-01-21 09:59:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:59:46.838092657 +0000 UTC m=+302.506184335" watchObservedRunningTime="2026-01-21 09:59:46.841757257 +0000 UTC m=+302.509848955"
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.868764 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc" podStartSLOduration=2.868748875 podStartE2EDuration="2.868748875s" podCreationTimestamp="2026-01-21 09:59:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:59:46.868099898 +0000 UTC m=+302.536191576" watchObservedRunningTime="2026-01-21 09:59:46.868748875 +0000 UTC m=+302.536840553"
Jan 21 09:59:46 crc kubenswrapper[5119]: I0121 09:59:46.952470 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-695sn"]
Jan 21 09:59:47 crc kubenswrapper[5119]: I0121 09:59:47.020901 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8dcd6"]
Jan 21 09:59:47 crc kubenswrapper[5119]: I0121 09:59:47.028072 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8dcd6"
Jan 21 09:59:47 crc kubenswrapper[5119]: I0121 09:59:47.029045 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8dcd6"]
Jan 21 09:59:47 crc kubenswrapper[5119]: I0121 09:59:47.030475 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Jan 21 09:59:47 crc kubenswrapper[5119]: I0121 09:59:47.081655 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e00889e-5f62-4c41-971c-f9ef4ed0d77e-utilities\") pod \"certified-operators-8dcd6\" (UID: \"8e00889e-5f62-4c41-971c-f9ef4ed0d77e\") " pod="openshift-marketplace/certified-operators-8dcd6"
Jan 21 09:59:47 crc kubenswrapper[5119]: I0121 09:59:47.081908 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e00889e-5f62-4c41-971c-f9ef4ed0d77e-catalog-content\") pod \"certified-operators-8dcd6\" (UID: \"8e00889e-5f62-4c41-971c-f9ef4ed0d77e\") " pod="openshift-marketplace/certified-operators-8dcd6"
Jan 21 09:59:47 crc kubenswrapper[5119]: I0121 09:59:47.081930 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72jz5\" (UniqueName: \"kubernetes.io/projected/8e00889e-5f62-4c41-971c-f9ef4ed0d77e-kube-api-access-72jz5\") pod \"certified-operators-8dcd6\" (UID: \"8e00889e-5f62-4c41-971c-f9ef4ed0d77e\") " pod="openshift-marketplace/certified-operators-8dcd6"
Jan 21 09:59:47 crc kubenswrapper[5119]: I0121 09:59:47.182970 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e00889e-5f62-4c41-971c-f9ef4ed0d77e-utilities\") pod \"certified-operators-8dcd6\" (UID: \"8e00889e-5f62-4c41-971c-f9ef4ed0d77e\") " pod="openshift-marketplace/certified-operators-8dcd6"
Jan 21 09:59:47 crc kubenswrapper[5119]: I0121 09:59:47.183498 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e00889e-5f62-4c41-971c-f9ef4ed0d77e-utilities\") pod \"certified-operators-8dcd6\" (UID: \"8e00889e-5f62-4c41-971c-f9ef4ed0d77e\") " pod="openshift-marketplace/certified-operators-8dcd6"
Jan 21 09:59:47 crc kubenswrapper[5119]: I0121 09:59:47.183864 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e00889e-5f62-4c41-971c-f9ef4ed0d77e-catalog-content\") pod \"certified-operators-8dcd6\" (UID: \"8e00889e-5f62-4c41-971c-f9ef4ed0d77e\") " pod="openshift-marketplace/certified-operators-8dcd6"
Jan 21 09:59:47 crc kubenswrapper[5119]: I0121 09:59:47.183937 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-72jz5\" (UniqueName: \"kubernetes.io/projected/8e00889e-5f62-4c41-971c-f9ef4ed0d77e-kube-api-access-72jz5\") pod \"certified-operators-8dcd6\" (UID: \"8e00889e-5f62-4c41-971c-f9ef4ed0d77e\") " pod="openshift-marketplace/certified-operators-8dcd6"
Jan 21 09:59:47 crc kubenswrapper[5119]: I0121 09:59:47.184133 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e00889e-5f62-4c41-971c-f9ef4ed0d77e-catalog-content\") pod \"certified-operators-8dcd6\" (UID: \"8e00889e-5f62-4c41-971c-f9ef4ed0d77e\") " pod="openshift-marketplace/certified-operators-8dcd6"
Jan 21 09:59:47 crc kubenswrapper[5119]: I0121 09:59:47.204514 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-72jz5\" (UniqueName: \"kubernetes.io/projected/8e00889e-5f62-4c41-971c-f9ef4ed0d77e-kube-api-access-72jz5\") pod \"certified-operators-8dcd6\" (UID: \"8e00889e-5f62-4c41-971c-f9ef4ed0d77e\") " pod="openshift-marketplace/certified-operators-8dcd6"
Jan 21 09:59:47 crc kubenswrapper[5119]: I0121 09:59:47.352599 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8dcd6"
Jan 21 09:59:47 crc kubenswrapper[5119]: I0121 09:59:47.770961 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8dcd6"]
Jan 21 09:59:47 crc kubenswrapper[5119]: I0121 09:59:47.831753 5119 generic.go:358] "Generic (PLEG): container finished" podID="0f41c580-660a-421c-8be8-7ec588566fe5" containerID="8d0aa420b2cbe8e17af2cfd61d114d3fe08e2da45f26faed4b171414f67eeee7" exitCode=0
Jan 21 09:59:47 crc kubenswrapper[5119]: I0121 09:59:47.831796 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-695sn" event={"ID":"0f41c580-660a-421c-8be8-7ec588566fe5","Type":"ContainerDied","Data":"8d0aa420b2cbe8e17af2cfd61d114d3fe08e2da45f26faed4b171414f67eeee7"}
Jan 21 09:59:47 crc kubenswrapper[5119]: I0121 09:59:47.831843 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-695sn" event={"ID":"0f41c580-660a-421c-8be8-7ec588566fe5","Type":"ContainerStarted","Data":"77ab43e9130ef01eb5fd4ee92bbe8322b3e4ca1de0ca72d8156a27127925c2b7"}
Jan 21 09:59:47 crc kubenswrapper[5119]: I0121 09:59:47.833000 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8dcd6" event={"ID":"8e00889e-5f62-4c41-971c-f9ef4ed0d77e","Type":"ContainerStarted","Data":"991ae4df8dcb3e4b05ec037d2cf27b07d0bb2249d10c7ca64b433c4af8dbe564"}
Jan 21 09:59:48 crc kubenswrapper[5119]: I0121 09:59:48.825620 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pc5lh"]
Jan 21 09:59:48 crc kubenswrapper[5119]: I0121 09:59:48.834049 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pc5lh"
Jan 21 09:59:48 crc kubenswrapper[5119]: I0121 09:59:48.834877 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pc5lh"]
Jan 21 09:59:48 crc kubenswrapper[5119]: I0121 09:59:48.838588 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Jan 21 09:59:48 crc kubenswrapper[5119]: I0121 09:59:48.844541 5119 generic.go:358] "Generic (PLEG): container finished" podID="0f41c580-660a-421c-8be8-7ec588566fe5" containerID="7f3eb5abd8e3d690dd45e39210d46c9c6bdac0544d329dd30a422d2c058f5038" exitCode=0
Jan 21 09:59:48 crc kubenswrapper[5119]: I0121 09:59:48.844733 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-695sn" event={"ID":"0f41c580-660a-421c-8be8-7ec588566fe5","Type":"ContainerDied","Data":"7f3eb5abd8e3d690dd45e39210d46c9c6bdac0544d329dd30a422d2c058f5038"}
Jan 21 09:59:48 crc kubenswrapper[5119]: I0121 09:59:48.854758 5119 generic.go:358] "Generic (PLEG): container finished" podID="8e00889e-5f62-4c41-971c-f9ef4ed0d77e" containerID="8e7632ca39488cb6ddfde5b6ee41838f88045beef1ba5d325e3a816de27342c1" exitCode=0
Jan 21 09:59:48 crc kubenswrapper[5119]: I0121 09:59:48.855139 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8dcd6" event={"ID":"8e00889e-5f62-4c41-971c-f9ef4ed0d77e","Type":"ContainerDied","Data":"8e7632ca39488cb6ddfde5b6ee41838f88045beef1ba5d325e3a816de27342c1"}
Jan 21 09:59:48 crc kubenswrapper[5119]: I0121 09:59:48.909817 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbgbt\" (UniqueName: \"kubernetes.io/projected/eb15b3ad-46ff-46ed-bd2b-b15ac9c4a321-kube-api-access-hbgbt\") pod \"redhat-operators-pc5lh\" (UID: \"eb15b3ad-46ff-46ed-bd2b-b15ac9c4a321\") " pod="openshift-marketplace/redhat-operators-pc5lh"
Jan 21 09:59:48 crc kubenswrapper[5119]: I0121 09:59:48.909921 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb15b3ad-46ff-46ed-bd2b-b15ac9c4a321-utilities\") pod \"redhat-operators-pc5lh\" (UID: \"eb15b3ad-46ff-46ed-bd2b-b15ac9c4a321\") " pod="openshift-marketplace/redhat-operators-pc5lh"
Jan 21 09:59:48 crc kubenswrapper[5119]: I0121 09:59:48.910036 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb15b3ad-46ff-46ed-bd2b-b15ac9c4a321-catalog-content\") pod \"redhat-operators-pc5lh\" (UID: \"eb15b3ad-46ff-46ed-bd2b-b15ac9c4a321\") " pod="openshift-marketplace/redhat-operators-pc5lh"
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.011009 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb15b3ad-46ff-46ed-bd2b-b15ac9c4a321-catalog-content\") pod \"redhat-operators-pc5lh\" (UID: \"eb15b3ad-46ff-46ed-bd2b-b15ac9c4a321\") " pod="openshift-marketplace/redhat-operators-pc5lh"
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.011116 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hbgbt\" (UniqueName: \"kubernetes.io/projected/eb15b3ad-46ff-46ed-bd2b-b15ac9c4a321-kube-api-access-hbgbt\") pod \"redhat-operators-pc5lh\" (UID: \"eb15b3ad-46ff-46ed-bd2b-b15ac9c4a321\") " pod="openshift-marketplace/redhat-operators-pc5lh"
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.011169 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb15b3ad-46ff-46ed-bd2b-b15ac9c4a321-utilities\") pod \"redhat-operators-pc5lh\" (UID: \"eb15b3ad-46ff-46ed-bd2b-b15ac9c4a321\") " pod="openshift-marketplace/redhat-operators-pc5lh"
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.011529 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb15b3ad-46ff-46ed-bd2b-b15ac9c4a321-catalog-content\") pod \"redhat-operators-pc5lh\" (UID: \"eb15b3ad-46ff-46ed-bd2b-b15ac9c4a321\") " pod="openshift-marketplace/redhat-operators-pc5lh"
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.011564 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb15b3ad-46ff-46ed-bd2b-b15ac9c4a321-utilities\") pod \"redhat-operators-pc5lh\" (UID: \"eb15b3ad-46ff-46ed-bd2b-b15ac9c4a321\") " pod="openshift-marketplace/redhat-operators-pc5lh"
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.040932 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbgbt\" (UniqueName: \"kubernetes.io/projected/eb15b3ad-46ff-46ed-bd2b-b15ac9c4a321-kube-api-access-hbgbt\") pod \"redhat-operators-pc5lh\" (UID: \"eb15b3ad-46ff-46ed-bd2b-b15ac9c4a321\") " pod="openshift-marketplace/redhat-operators-pc5lh"
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.109734 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc"]
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.160049 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pc5lh"
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.420864 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hqs4l"]
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.430523 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hqs4l"
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.434253 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.441127 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hqs4l"]
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.516427 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1563cf60-a66c-484e-bc5d-6dd7571d55a6-catalog-content\") pod \"community-operators-hqs4l\" (UID: \"1563cf60-a66c-484e-bc5d-6dd7571d55a6\") " pod="openshift-marketplace/community-operators-hqs4l"
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.516504 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhqpv\" (UniqueName: \"kubernetes.io/projected/1563cf60-a66c-484e-bc5d-6dd7571d55a6-kube-api-access-dhqpv\") pod \"community-operators-hqs4l\" (UID: \"1563cf60-a66c-484e-bc5d-6dd7571d55a6\") " pod="openshift-marketplace/community-operators-hqs4l"
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.516589 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1563cf60-a66c-484e-bc5d-6dd7571d55a6-utilities\") pod \"community-operators-hqs4l\" (UID: \"1563cf60-a66c-484e-bc5d-6dd7571d55a6\") " pod="openshift-marketplace/community-operators-hqs4l"
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.547471 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pc5lh"]
Jan 21 09:59:49 crc kubenswrapper[5119]: W0121 09:59:49.564274 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb15b3ad_46ff_46ed_bd2b_b15ac9c4a321.slice/crio-5d06a04debf8f3c4a55ae48316b6f0a4abb8c459518b9f666fbfa70cc8867075 WatchSource:0}: Error finding container 5d06a04debf8f3c4a55ae48316b6f0a4abb8c459518b9f666fbfa70cc8867075: Status 404 returned error can't find the container with id 5d06a04debf8f3c4a55ae48316b6f0a4abb8c459518b9f666fbfa70cc8867075
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.619354 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1563cf60-a66c-484e-bc5d-6dd7571d55a6-catalog-content\") pod \"community-operators-hqs4l\" (UID: \"1563cf60-a66c-484e-bc5d-6dd7571d55a6\") " pod="openshift-marketplace/community-operators-hqs4l"
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.619651 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dhqpv\" (UniqueName: \"kubernetes.io/projected/1563cf60-a66c-484e-bc5d-6dd7571d55a6-kube-api-access-dhqpv\") pod \"community-operators-hqs4l\" (UID: \"1563cf60-a66c-484e-bc5d-6dd7571d55a6\") " pod="openshift-marketplace/community-operators-hqs4l"
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.619843 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1563cf60-a66c-484e-bc5d-6dd7571d55a6-utilities\") pod \"community-operators-hqs4l\" (UID: \"1563cf60-a66c-484e-bc5d-6dd7571d55a6\") " pod="openshift-marketplace/community-operators-hqs4l"
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.619856 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1563cf60-a66c-484e-bc5d-6dd7571d55a6-catalog-content\") pod \"community-operators-hqs4l\" (UID: \"1563cf60-a66c-484e-bc5d-6dd7571d55a6\") " pod="openshift-marketplace/community-operators-hqs4l"
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.620114 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1563cf60-a66c-484e-bc5d-6dd7571d55a6-utilities\") pod \"community-operators-hqs4l\" (UID: \"1563cf60-a66c-484e-bc5d-6dd7571d55a6\") " pod="openshift-marketplace/community-operators-hqs4l"
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.639150 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhqpv\" (UniqueName: \"kubernetes.io/projected/1563cf60-a66c-484e-bc5d-6dd7571d55a6-kube-api-access-dhqpv\") pod \"community-operators-hqs4l\" (UID: \"1563cf60-a66c-484e-bc5d-6dd7571d55a6\") " pod="openshift-marketplace/community-operators-hqs4l"
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.748646 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hqs4l"
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.862358 5119 generic.go:358] "Generic (PLEG): container finished" podID="8e00889e-5f62-4c41-971c-f9ef4ed0d77e" containerID="1c9d9f29a704cb2cd955915160cfc0c19ff5c5a4838550237011e66d3c74333c" exitCode=0
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.862403 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8dcd6" event={"ID":"8e00889e-5f62-4c41-971c-f9ef4ed0d77e","Type":"ContainerDied","Data":"1c9d9f29a704cb2cd955915160cfc0c19ff5c5a4838550237011e66d3c74333c"}
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.865581 5119 generic.go:358] "Generic (PLEG): container finished" podID="eb15b3ad-46ff-46ed-bd2b-b15ac9c4a321" containerID="4a1074c19286c850038474f2550545de1be1ef0764fc48d4e0dfc3a2bc820474" exitCode=0
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.865782 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pc5lh" event={"ID":"eb15b3ad-46ff-46ed-bd2b-b15ac9c4a321","Type":"ContainerDied","Data":"4a1074c19286c850038474f2550545de1be1ef0764fc48d4e0dfc3a2bc820474"}
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.865821 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pc5lh" event={"ID":"eb15b3ad-46ff-46ed-bd2b-b15ac9c4a321","Type":"ContainerStarted","Data":"5d06a04debf8f3c4a55ae48316b6f0a4abb8c459518b9f666fbfa70cc8867075"}
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.874441 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-695sn" event={"ID":"0f41c580-660a-421c-8be8-7ec588566fe5","Type":"ContainerStarted","Data":"47c7ced48960ecde742e3e93598fd6ab83e1e2d16dbe871880e484e40572ed73"}
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.874730 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc" podUID="afe35b85-6f80-41f6-8ca1-62fe862e7eee" containerName="route-controller-manager" containerID="cri-o://2ed03dfb5bde5bb6b7ad11384a57bd105822d0fcfa316029c5be245dbdb1c2ca" gracePeriod=30
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.899879 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-695sn" podStartSLOduration=3.268558532 podStartE2EDuration="3.899862721s" podCreationTimestamp="2026-01-21 09:59:46 +0000 UTC" firstStartedPulling="2026-01-21 09:59:47.832712798 +0000 UTC m=+303.500804466" lastFinishedPulling="2026-01-21 09:59:48.464016977 +0000 UTC m=+304.132108655" observedRunningTime="2026-01-21 09:59:49.899003198 +0000 UTC m=+305.567094876" watchObservedRunningTime="2026-01-21 09:59:49.899862721 +0000 UTC m=+305.567954409"
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.918409 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.920621 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.920693 5119 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk"
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.922094 5119 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d40e23f49b5ccbd79a7b7631bfb1923bda39e2fe75a27231461f8d5e6aec28b1"} pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.922177 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" containerID="cri-o://d40e23f49b5ccbd79a7b7631bfb1923bda39e2fe75a27231461f8d5e6aec28b1" gracePeriod=600
Jan 21 09:59:49 crc kubenswrapper[5119]: I0121 09:59:49.957464 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hqs4l"]
Jan 21 09:59:49 crc kubenswrapper[5119]: W0121 09:59:49.967350 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1563cf60_a66c_484e_bc5d_6dd7571d55a6.slice/crio-b16080976bbcf682e8d4f8136e06d7aa2a1acfad56aa06b97ed4652c08978059 WatchSource:0}: Error finding container b16080976bbcf682e8d4f8136e06d7aa2a1acfad56aa06b97ed4652c08978059: Status 404 returned error can't find the container with id b16080976bbcf682e8d4f8136e06d7aa2a1acfad56aa06b97ed4652c08978059
Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.207401 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc"
Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.276015 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afe35b85-6f80-41f6-8ca1-62fe862e7eee-client-ca\") pod \"afe35b85-6f80-41f6-8ca1-62fe862e7eee\" (UID: \"afe35b85-6f80-41f6-8ca1-62fe862e7eee\") "
Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.276057 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/afe35b85-6f80-41f6-8ca1-62fe862e7eee-tmp\") pod \"afe35b85-6f80-41f6-8ca1-62fe862e7eee\" (UID: \"afe35b85-6f80-41f6-8ca1-62fe862e7eee\") "
Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.276101 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afe35b85-6f80-41f6-8ca1-62fe862e7eee-config\") pod \"afe35b85-6f80-41f6-8ca1-62fe862e7eee\" (UID: \"afe35b85-6f80-41f6-8ca1-62fe862e7eee\") "
Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.276118 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6f9h\" (UniqueName: \"kubernetes.io/projected/afe35b85-6f80-41f6-8ca1-62fe862e7eee-kube-api-access-r6f9h\") pod \"afe35b85-6f80-41f6-8ca1-62fe862e7eee\" (UID: \"afe35b85-6f80-41f6-8ca1-62fe862e7eee\") "
Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.276150 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afe35b85-6f80-41f6-8ca1-62fe862e7eee-serving-cert\") pod \"afe35b85-6f80-41f6-8ca1-62fe862e7eee\" (UID: \"afe35b85-6f80-41f6-8ca1-62fe862e7eee\") "
Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.278986 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afe35b85-6f80-41f6-8ca1-62fe862e7eee-tmp" (OuterVolumeSpecName: "tmp") pod "afe35b85-6f80-41f6-8ca1-62fe862e7eee" (UID: "afe35b85-6f80-41f6-8ca1-62fe862e7eee"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.279525 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afe35b85-6f80-41f6-8ca1-62fe862e7eee-client-ca" (OuterVolumeSpecName: "client-ca") pod "afe35b85-6f80-41f6-8ca1-62fe862e7eee" (UID: "afe35b85-6f80-41f6-8ca1-62fe862e7eee"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.280229 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afe35b85-6f80-41f6-8ca1-62fe862e7eee-config" (OuterVolumeSpecName: "config") pod "afe35b85-6f80-41f6-8ca1-62fe862e7eee" (UID: "afe35b85-6f80-41f6-8ca1-62fe862e7eee"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.283071 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe35b85-6f80-41f6-8ca1-62fe862e7eee-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "afe35b85-6f80-41f6-8ca1-62fe862e7eee" (UID: "afe35b85-6f80-41f6-8ca1-62fe862e7eee"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.283636 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afe35b85-6f80-41f6-8ca1-62fe862e7eee-kube-api-access-r6f9h" (OuterVolumeSpecName: "kube-api-access-r6f9h") pod "afe35b85-6f80-41f6-8ca1-62fe862e7eee" (UID: "afe35b85-6f80-41f6-8ca1-62fe862e7eee"). InnerVolumeSpecName "kube-api-access-r6f9h". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.284789 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9cc448f57-9b2vn"]
Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.285405 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="afe35b85-6f80-41f6-8ca1-62fe862e7eee" containerName="route-controller-manager"
Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.285423 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe35b85-6f80-41f6-8ca1-62fe862e7eee" containerName="route-controller-manager"
Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.285560 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="afe35b85-6f80-41f6-8ca1-62fe862e7eee" containerName="route-controller-manager"
Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.295950 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9cc448f57-9b2vn"]
Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.296053 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9cc448f57-9b2vn" Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.377571 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a7068908-1db9-4cb3-8327-af7d28de71e0-tmp\") pod \"route-controller-manager-9cc448f57-9b2vn\" (UID: \"a7068908-1db9-4cb3-8327-af7d28de71e0\") " pod="openshift-route-controller-manager/route-controller-manager-9cc448f57-9b2vn" Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.377667 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg7q2\" (UniqueName: \"kubernetes.io/projected/a7068908-1db9-4cb3-8327-af7d28de71e0-kube-api-access-dg7q2\") pod \"route-controller-manager-9cc448f57-9b2vn\" (UID: \"a7068908-1db9-4cb3-8327-af7d28de71e0\") " pod="openshift-route-controller-manager/route-controller-manager-9cc448f57-9b2vn" Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.377688 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7068908-1db9-4cb3-8327-af7d28de71e0-serving-cert\") pod \"route-controller-manager-9cc448f57-9b2vn\" (UID: \"a7068908-1db9-4cb3-8327-af7d28de71e0\") " pod="openshift-route-controller-manager/route-controller-manager-9cc448f57-9b2vn" Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.377719 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7068908-1db9-4cb3-8327-af7d28de71e0-config\") pod \"route-controller-manager-9cc448f57-9b2vn\" (UID: \"a7068908-1db9-4cb3-8327-af7d28de71e0\") " pod="openshift-route-controller-manager/route-controller-manager-9cc448f57-9b2vn" Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.377741 5119 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7068908-1db9-4cb3-8327-af7d28de71e0-client-ca\") pod \"route-controller-manager-9cc448f57-9b2vn\" (UID: \"a7068908-1db9-4cb3-8327-af7d28de71e0\") " pod="openshift-route-controller-manager/route-controller-manager-9cc448f57-9b2vn" Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.377837 5119 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afe35b85-6f80-41f6-8ca1-62fe862e7eee-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.377849 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r6f9h\" (UniqueName: \"kubernetes.io/projected/afe35b85-6f80-41f6-8ca1-62fe862e7eee-kube-api-access-r6f9h\") on node \"crc\" DevicePath \"\"" Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.377860 5119 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afe35b85-6f80-41f6-8ca1-62fe862e7eee-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.377868 5119 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afe35b85-6f80-41f6-8ca1-62fe862e7eee-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.377876 5119 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/afe35b85-6f80-41f6-8ca1-62fe862e7eee-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.478451 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a7068908-1db9-4cb3-8327-af7d28de71e0-tmp\") pod \"route-controller-manager-9cc448f57-9b2vn\" (UID: \"a7068908-1db9-4cb3-8327-af7d28de71e0\") " 
pod="openshift-route-controller-manager/route-controller-manager-9cc448f57-9b2vn" Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.478528 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dg7q2\" (UniqueName: \"kubernetes.io/projected/a7068908-1db9-4cb3-8327-af7d28de71e0-kube-api-access-dg7q2\") pod \"route-controller-manager-9cc448f57-9b2vn\" (UID: \"a7068908-1db9-4cb3-8327-af7d28de71e0\") " pod="openshift-route-controller-manager/route-controller-manager-9cc448f57-9b2vn" Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.478550 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7068908-1db9-4cb3-8327-af7d28de71e0-serving-cert\") pod \"route-controller-manager-9cc448f57-9b2vn\" (UID: \"a7068908-1db9-4cb3-8327-af7d28de71e0\") " pod="openshift-route-controller-manager/route-controller-manager-9cc448f57-9b2vn" Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.478581 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7068908-1db9-4cb3-8327-af7d28de71e0-config\") pod \"route-controller-manager-9cc448f57-9b2vn\" (UID: \"a7068908-1db9-4cb3-8327-af7d28de71e0\") " pod="openshift-route-controller-manager/route-controller-manager-9cc448f57-9b2vn" Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.478612 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7068908-1db9-4cb3-8327-af7d28de71e0-client-ca\") pod \"route-controller-manager-9cc448f57-9b2vn\" (UID: \"a7068908-1db9-4cb3-8327-af7d28de71e0\") " pod="openshift-route-controller-manager/route-controller-manager-9cc448f57-9b2vn" Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.479932 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/a7068908-1db9-4cb3-8327-af7d28de71e0-client-ca\") pod \"route-controller-manager-9cc448f57-9b2vn\" (UID: \"a7068908-1db9-4cb3-8327-af7d28de71e0\") " pod="openshift-route-controller-manager/route-controller-manager-9cc448f57-9b2vn" Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.480018 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a7068908-1db9-4cb3-8327-af7d28de71e0-tmp\") pod \"route-controller-manager-9cc448f57-9b2vn\" (UID: \"a7068908-1db9-4cb3-8327-af7d28de71e0\") " pod="openshift-route-controller-manager/route-controller-manager-9cc448f57-9b2vn" Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.480131 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7068908-1db9-4cb3-8327-af7d28de71e0-config\") pod \"route-controller-manager-9cc448f57-9b2vn\" (UID: \"a7068908-1db9-4cb3-8327-af7d28de71e0\") " pod="openshift-route-controller-manager/route-controller-manager-9cc448f57-9b2vn" Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.485667 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7068908-1db9-4cb3-8327-af7d28de71e0-serving-cert\") pod \"route-controller-manager-9cc448f57-9b2vn\" (UID: \"a7068908-1db9-4cb3-8327-af7d28de71e0\") " pod="openshift-route-controller-manager/route-controller-manager-9cc448f57-9b2vn" Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.498455 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg7q2\" (UniqueName: \"kubernetes.io/projected/a7068908-1db9-4cb3-8327-af7d28de71e0-kube-api-access-dg7q2\") pod \"route-controller-manager-9cc448f57-9b2vn\" (UID: \"a7068908-1db9-4cb3-8327-af7d28de71e0\") " pod="openshift-route-controller-manager/route-controller-manager-9cc448f57-9b2vn" Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 
09:59:50.609227 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9cc448f57-9b2vn" Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.884709 5119 generic.go:358] "Generic (PLEG): container finished" podID="1563cf60-a66c-484e-bc5d-6dd7571d55a6" containerID="cb12f4a67a06688eba5a2ba048e807ef19d39a05075346576b4472ed0af4fff0" exitCode=0 Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.884793 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqs4l" event={"ID":"1563cf60-a66c-484e-bc5d-6dd7571d55a6","Type":"ContainerDied","Data":"cb12f4a67a06688eba5a2ba048e807ef19d39a05075346576b4472ed0af4fff0"} Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.885072 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqs4l" event={"ID":"1563cf60-a66c-484e-bc5d-6dd7571d55a6","Type":"ContainerStarted","Data":"b16080976bbcf682e8d4f8136e06d7aa2a1acfad56aa06b97ed4652c08978059"} Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.894104 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8dcd6" event={"ID":"8e00889e-5f62-4c41-971c-f9ef4ed0d77e","Type":"ContainerStarted","Data":"b17ca0838a77241eb2bd3a4625cbb13c15963287816df5458c72c0797e8bb4aa"} Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.895991 5119 generic.go:358] "Generic (PLEG): container finished" podID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerID="d40e23f49b5ccbd79a7b7631bfb1923bda39e2fe75a27231461f8d5e6aec28b1" exitCode=0 Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.896028 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerDied","Data":"d40e23f49b5ccbd79a7b7631bfb1923bda39e2fe75a27231461f8d5e6aec28b1"} Jan 21 
09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.896072 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerStarted","Data":"ccf407c7bf9fef5463fbdd0f20c4692fd497cff47399ce80319fb6eadef27ee1"} Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.897481 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pc5lh" event={"ID":"eb15b3ad-46ff-46ed-bd2b-b15ac9c4a321","Type":"ContainerStarted","Data":"471b34fa78f825b1bc5ff910b4da618395e0d9392d6ccfa0f531b35c2824958e"} Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.898622 5119 generic.go:358] "Generic (PLEG): container finished" podID="afe35b85-6f80-41f6-8ca1-62fe862e7eee" containerID="2ed03dfb5bde5bb6b7ad11384a57bd105822d0fcfa316029c5be245dbdb1c2ca" exitCode=0 Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.899027 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc" Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.899340 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc" event={"ID":"afe35b85-6f80-41f6-8ca1-62fe862e7eee","Type":"ContainerDied","Data":"2ed03dfb5bde5bb6b7ad11384a57bd105822d0fcfa316029c5be245dbdb1c2ca"} Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.899359 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc" event={"ID":"afe35b85-6f80-41f6-8ca1-62fe862e7eee","Type":"ContainerDied","Data":"97daa0d3def6cc3a91013cbd48d8967372f9ed84968f1e630d20e06bc2167e82"} Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.899375 5119 scope.go:117] "RemoveContainer" containerID="2ed03dfb5bde5bb6b7ad11384a57bd105822d0fcfa316029c5be245dbdb1c2ca" Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.922647 5119 scope.go:117] "RemoveContainer" containerID="2ed03dfb5bde5bb6b7ad11384a57bd105822d0fcfa316029c5be245dbdb1c2ca" Jan 21 09:59:50 crc kubenswrapper[5119]: E0121 09:59:50.923014 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ed03dfb5bde5bb6b7ad11384a57bd105822d0fcfa316029c5be245dbdb1c2ca\": container with ID starting with 2ed03dfb5bde5bb6b7ad11384a57bd105822d0fcfa316029c5be245dbdb1c2ca not found: ID does not exist" containerID="2ed03dfb5bde5bb6b7ad11384a57bd105822d0fcfa316029c5be245dbdb1c2ca" Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.923048 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ed03dfb5bde5bb6b7ad11384a57bd105822d0fcfa316029c5be245dbdb1c2ca"} err="failed to get container status \"2ed03dfb5bde5bb6b7ad11384a57bd105822d0fcfa316029c5be245dbdb1c2ca\": rpc error: code = NotFound desc = 
could not find container \"2ed03dfb5bde5bb6b7ad11384a57bd105822d0fcfa316029c5be245dbdb1c2ca\": container with ID starting with 2ed03dfb5bde5bb6b7ad11384a57bd105822d0fcfa316029c5be245dbdb1c2ca not found: ID does not exist" Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.938422 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8dcd6" podStartSLOduration=3.331520043 podStartE2EDuration="3.938404174s" podCreationTimestamp="2026-01-21 09:59:47 +0000 UTC" firstStartedPulling="2026-01-21 09:59:48.857036802 +0000 UTC m=+304.525128490" lastFinishedPulling="2026-01-21 09:59:49.463920943 +0000 UTC m=+305.132012621" observedRunningTime="2026-01-21 09:59:50.937035466 +0000 UTC m=+306.605127154" watchObservedRunningTime="2026-01-21 09:59:50.938404174 +0000 UTC m=+306.606495852" Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.948795 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc"] Jan 21 09:59:50 crc kubenswrapper[5119]: I0121 09:59:50.953028 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c8f4bdfb-p2wqc"] Jan 21 09:59:51 crc kubenswrapper[5119]: I0121 09:59:51.015676 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9cc448f57-9b2vn"] Jan 21 09:59:51 crc kubenswrapper[5119]: I0121 09:59:51.908543 5119 generic.go:358] "Generic (PLEG): container finished" podID="eb15b3ad-46ff-46ed-bd2b-b15ac9c4a321" containerID="471b34fa78f825b1bc5ff910b4da618395e0d9392d6ccfa0f531b35c2824958e" exitCode=0 Jan 21 09:59:51 crc kubenswrapper[5119]: I0121 09:59:51.908743 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pc5lh" 
event={"ID":"eb15b3ad-46ff-46ed-bd2b-b15ac9c4a321","Type":"ContainerDied","Data":"471b34fa78f825b1bc5ff910b4da618395e0d9392d6ccfa0f531b35c2824958e"} Jan 21 09:59:51 crc kubenswrapper[5119]: I0121 09:59:51.917117 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9cc448f57-9b2vn" event={"ID":"a7068908-1db9-4cb3-8327-af7d28de71e0","Type":"ContainerStarted","Data":"ecfd0a61441835fc7a5bfc238350c6be521831e858b99d1c50b9924abb294e3d"} Jan 21 09:59:51 crc kubenswrapper[5119]: I0121 09:59:51.917158 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9cc448f57-9b2vn" event={"ID":"a7068908-1db9-4cb3-8327-af7d28de71e0","Type":"ContainerStarted","Data":"39138cca6d9ee111c4553697fe4b221ca24af16a5d9e87ed5df1cd278e497702"} Jan 21 09:59:51 crc kubenswrapper[5119]: I0121 09:59:51.920044 5119 generic.go:358] "Generic (PLEG): container finished" podID="1563cf60-a66c-484e-bc5d-6dd7571d55a6" containerID="d9ed1de9132cb30044a159e096a874bed965010aa8ba0370728f47f1b6dad4c7" exitCode=0 Jan 21 09:59:51 crc kubenswrapper[5119]: I0121 09:59:51.920637 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqs4l" event={"ID":"1563cf60-a66c-484e-bc5d-6dd7571d55a6","Type":"ContainerDied","Data":"d9ed1de9132cb30044a159e096a874bed965010aa8ba0370728f47f1b6dad4c7"} Jan 21 09:59:51 crc kubenswrapper[5119]: I0121 09:59:51.921148 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-9cc448f57-9b2vn" Jan 21 09:59:51 crc kubenswrapper[5119]: I0121 09:59:51.930009 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-9cc448f57-9b2vn" Jan 21 09:59:51 crc kubenswrapper[5119]: I0121 09:59:51.998574 5119 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-route-controller-manager/route-controller-manager-9cc448f57-9b2vn" podStartSLOduration=2.998547166 podStartE2EDuration="2.998547166s" podCreationTimestamp="2026-01-21 09:59:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:59:51.971757485 +0000 UTC m=+307.639849163" watchObservedRunningTime="2026-01-21 09:59:51.998547166 +0000 UTC m=+307.666638854" Jan 21 09:59:52 crc kubenswrapper[5119]: I0121 09:59:52.597338 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afe35b85-6f80-41f6-8ca1-62fe862e7eee" path="/var/lib/kubelet/pods/afe35b85-6f80-41f6-8ca1-62fe862e7eee/volumes" Jan 21 09:59:52 crc kubenswrapper[5119]: I0121 09:59:52.928347 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pc5lh" event={"ID":"eb15b3ad-46ff-46ed-bd2b-b15ac9c4a321","Type":"ContainerStarted","Data":"c5e2100bca6a891ada7d26f100c32b65f243b5b8506cbe4a560dfeeb26af24dd"} Jan 21 09:59:52 crc kubenswrapper[5119]: I0121 09:59:52.930911 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqs4l" event={"ID":"1563cf60-a66c-484e-bc5d-6dd7571d55a6","Type":"ContainerStarted","Data":"16683ab55c3e3733b61389fd05a831a73dcc5930eabef17b86ce339479722e00"} Jan 21 09:59:52 crc kubenswrapper[5119]: I0121 09:59:52.950373 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pc5lh" podStartSLOduration=4.25398394 podStartE2EDuration="4.950357898s" podCreationTimestamp="2026-01-21 09:59:48 +0000 UTC" firstStartedPulling="2026-01-21 09:59:49.867334242 +0000 UTC m=+305.535425920" lastFinishedPulling="2026-01-21 09:59:50.5637082 +0000 UTC m=+306.231799878" observedRunningTime="2026-01-21 09:59:52.948661911 +0000 UTC m=+308.616753589" watchObservedRunningTime="2026-01-21 09:59:52.950357898 +0000 UTC m=+308.618449576" 
Jan 21 09:59:52 crc kubenswrapper[5119]: I0121 09:59:52.969107 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hqs4l" podStartSLOduration=3.36446144 podStartE2EDuration="3.96909107s" podCreationTimestamp="2026-01-21 09:59:49 +0000 UTC" firstStartedPulling="2026-01-21 09:59:50.885571389 +0000 UTC m=+306.553663067" lastFinishedPulling="2026-01-21 09:59:51.490201029 +0000 UTC m=+307.158292697" observedRunningTime="2026-01-21 09:59:52.964393941 +0000 UTC m=+308.632485619" watchObservedRunningTime="2026-01-21 09:59:52.96909107 +0000 UTC m=+308.637182738" Jan 21 09:59:53 crc kubenswrapper[5119]: I0121 09:59:53.936475 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-vxrkh"] Jan 21 09:59:54 crc kubenswrapper[5119]: I0121 09:59:54.111121 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-vxrkh"] Jan 21 09:59:54 crc kubenswrapper[5119]: I0121 09:59:54.111297 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-vxrkh" Jan 21 09:59:54 crc kubenswrapper[5119]: I0121 09:59:54.223547 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94-bound-sa-token\") pod \"image-registry-5d9d95bf5b-vxrkh\" (UID: \"7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vxrkh" Jan 21 09:59:54 crc kubenswrapper[5119]: I0121 09:59:54.223638 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-vxrkh\" (UID: \"7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vxrkh" Jan 21 09:59:54 crc kubenswrapper[5119]: I0121 09:59:54.223796 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-vxrkh\" (UID: \"7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vxrkh" Jan 21 09:59:54 crc kubenswrapper[5119]: I0121 09:59:54.223841 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lppxc\" (UniqueName: \"kubernetes.io/projected/7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94-kube-api-access-lppxc\") pod \"image-registry-5d9d95bf5b-vxrkh\" (UID: \"7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vxrkh" Jan 21 09:59:54 crc kubenswrapper[5119]: I0121 09:59:54.223917 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94-registry-certificates\") pod \"image-registry-5d9d95bf5b-vxrkh\" (UID: \"7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vxrkh" Jan 21 09:59:54 crc kubenswrapper[5119]: I0121 09:59:54.223943 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94-trusted-ca\") pod \"image-registry-5d9d95bf5b-vxrkh\" (UID: \"7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vxrkh" Jan 21 09:59:54 crc kubenswrapper[5119]: I0121 09:59:54.223962 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-vxrkh\" (UID: \"7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vxrkh" Jan 21 09:59:54 crc kubenswrapper[5119]: I0121 09:59:54.224000 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94-registry-tls\") pod \"image-registry-5d9d95bf5b-vxrkh\" (UID: \"7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vxrkh" Jan 21 09:59:54 crc kubenswrapper[5119]: I0121 09:59:54.244294 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-vxrkh\" (UID: \"7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94\") " 
pod="openshift-image-registry/image-registry-5d9d95bf5b-vxrkh" Jan 21 09:59:54 crc kubenswrapper[5119]: I0121 09:59:54.325289 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-vxrkh\" (UID: \"7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vxrkh" Jan 21 09:59:54 crc kubenswrapper[5119]: I0121 09:59:54.325674 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lppxc\" (UniqueName: \"kubernetes.io/projected/7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94-kube-api-access-lppxc\") pod \"image-registry-5d9d95bf5b-vxrkh\" (UID: \"7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vxrkh" Jan 21 09:59:54 crc kubenswrapper[5119]: I0121 09:59:54.325727 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94-registry-certificates\") pod \"image-registry-5d9d95bf5b-vxrkh\" (UID: \"7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vxrkh" Jan 21 09:59:54 crc kubenswrapper[5119]: I0121 09:59:54.325759 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94-trusted-ca\") pod \"image-registry-5d9d95bf5b-vxrkh\" (UID: \"7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vxrkh" Jan 21 09:59:54 crc kubenswrapper[5119]: I0121 09:59:54.325785 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-vxrkh\" (UID: \"7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vxrkh" Jan 21 09:59:54 crc kubenswrapper[5119]: I0121 09:59:54.325821 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94-registry-tls\") pod \"image-registry-5d9d95bf5b-vxrkh\" (UID: \"7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vxrkh" Jan 21 09:59:54 crc kubenswrapper[5119]: I0121 09:59:54.325879 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94-bound-sa-token\") pod \"image-registry-5d9d95bf5b-vxrkh\" (UID: \"7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vxrkh" Jan 21 09:59:54 crc kubenswrapper[5119]: I0121 09:59:54.326320 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-vxrkh\" (UID: \"7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vxrkh" Jan 21 09:59:54 crc kubenswrapper[5119]: I0121 09:59:54.327228 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94-registry-certificates\") pod \"image-registry-5d9d95bf5b-vxrkh\" (UID: \"7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vxrkh" Jan 21 09:59:54 crc kubenswrapper[5119]: I0121 09:59:54.327686 5119 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94-trusted-ca\") pod \"image-registry-5d9d95bf5b-vxrkh\" (UID: \"7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vxrkh"
Jan 21 09:59:54 crc kubenswrapper[5119]: I0121 09:59:54.338339 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-vxrkh\" (UID: \"7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vxrkh"
Jan 21 09:59:54 crc kubenswrapper[5119]: I0121 09:59:54.338389 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94-registry-tls\") pod \"image-registry-5d9d95bf5b-vxrkh\" (UID: \"7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vxrkh"
Jan 21 09:59:54 crc kubenswrapper[5119]: I0121 09:59:54.346505 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lppxc\" (UniqueName: \"kubernetes.io/projected/7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94-kube-api-access-lppxc\") pod \"image-registry-5d9d95bf5b-vxrkh\" (UID: \"7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vxrkh"
Jan 21 09:59:54 crc kubenswrapper[5119]: I0121 09:59:54.349943 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94-bound-sa-token\") pod \"image-registry-5d9d95bf5b-vxrkh\" (UID: \"7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vxrkh"
Jan 21 09:59:54 crc kubenswrapper[5119]: I0121 09:59:54.429215 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-vxrkh"
Jan 21 09:59:54 crc kubenswrapper[5119]: I0121 09:59:54.831051 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-vxrkh"]
Jan 21 09:59:54 crc kubenswrapper[5119]: W0121 09:59:54.837040 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d9f0e2d_ed1f_4fbe_87c6_154eeda16a94.slice/crio-63cddf3efca26575e2bf0ee8fe6f2f723dcb70e0365e0dbf8d99945d20753396 WatchSource:0}: Error finding container 63cddf3efca26575e2bf0ee8fe6f2f723dcb70e0365e0dbf8d99945d20753396: Status 404 returned error can't find the container with id 63cddf3efca26575e2bf0ee8fe6f2f723dcb70e0365e0dbf8d99945d20753396
Jan 21 09:59:54 crc kubenswrapper[5119]: I0121 09:59:54.942875 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-vxrkh" event={"ID":"7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94","Type":"ContainerStarted","Data":"63cddf3efca26575e2bf0ee8fe6f2f723dcb70e0365e0dbf8d99945d20753396"}
Jan 21 09:59:55 crc kubenswrapper[5119]: I0121 09:59:55.948926 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-vxrkh" event={"ID":"7d9f0e2d-ed1f-4fbe-87c6-154eeda16a94","Type":"ContainerStarted","Data":"ec5acdc1322b2daa78427a1c99208cd9175c2d9a919e5b37d8668818d30cb3e3"}
Jan 21 09:59:55 crc kubenswrapper[5119]: I0121 09:59:55.950428 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-vxrkh"
Jan 21 09:59:55 crc kubenswrapper[5119]: I0121 09:59:55.968271 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-vxrkh" podStartSLOduration=2.968258782 podStartE2EDuration="2.968258782s" podCreationTimestamp="2026-01-21 09:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:59:55.965563579 +0000 UTC m=+311.633655257" watchObservedRunningTime="2026-01-21 09:59:55.968258782 +0000 UTC m=+311.636350460"
Jan 21 09:59:56 crc kubenswrapper[5119]: E0121 09:59:56.481636 5119 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee0294ff_f61f_492b_b738_fbbee8f757eb.slice/crio-49a6bf6ae56dfc316394a3f87c6c306e64599a06b80212271663b64264194b7a.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 09:59:56 crc kubenswrapper[5119]: I0121 09:59:56.742443 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-695sn"
Jan 21 09:59:56 crc kubenswrapper[5119]: I0121 09:59:56.742788 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-695sn"
Jan 21 09:59:56 crc kubenswrapper[5119]: I0121 09:59:56.782370 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-695sn"
Jan 21 09:59:56 crc kubenswrapper[5119]: I0121 09:59:56.988265 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-695sn"
Jan 21 09:59:57 crc kubenswrapper[5119]: I0121 09:59:57.353215 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8dcd6"
Jan 21 09:59:57 crc kubenswrapper[5119]: I0121 09:59:57.353321 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-8dcd6"
Jan 21 09:59:57 crc kubenswrapper[5119]: I0121 09:59:57.389527 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8dcd6"
Jan 21 09:59:57 crc kubenswrapper[5119]: I0121 09:59:57.993452 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8dcd6"
Jan 21 09:59:59 crc kubenswrapper[5119]: I0121 09:59:59.160577 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-pc5lh"
Jan 21 09:59:59 crc kubenswrapper[5119]: I0121 09:59:59.162047 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pc5lh"
Jan 21 09:59:59 crc kubenswrapper[5119]: I0121 09:59:59.200509 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pc5lh"
Jan 21 09:59:59 crc kubenswrapper[5119]: I0121 09:59:59.749248 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hqs4l"
Jan 21 09:59:59 crc kubenswrapper[5119]: I0121 09:59:59.749634 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-hqs4l"
Jan 21 09:59:59 crc kubenswrapper[5119]: I0121 09:59:59.784217 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hqs4l"
Jan 21 10:00:00 crc kubenswrapper[5119]: I0121 10:00:00.019222 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pc5lh"
Jan 21 10:00:00 crc kubenswrapper[5119]: I0121 10:00:00.019462 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hqs4l"
Jan 21 10:00:00 crc kubenswrapper[5119]: I0121 10:00:00.158160 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483160-fn4lx"]
Jan 21 10:00:00 crc kubenswrapper[5119]: I0121 10:00:00.180503 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483160-fn4lx"]
Jan 21 10:00:00 crc kubenswrapper[5119]: I0121 10:00:00.180542 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483160-xpdns"]
Jan 21 10:00:00 crc kubenswrapper[5119]: I0121 10:00:00.180746 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483160-fn4lx"
Jan 21 10:00:00 crc kubenswrapper[5119]: I0121 10:00:00.186235 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483160-xpdns"]
Jan 21 10:00:00 crc kubenswrapper[5119]: I0121 10:00:00.186319 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\""
Jan 21 10:00:00 crc kubenswrapper[5119]: I0121 10:00:00.186343 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-xpdns"
Jan 21 10:00:00 crc kubenswrapper[5119]: I0121 10:00:00.186662 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 10:00:00 crc kubenswrapper[5119]: I0121 10:00:00.186869 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 10:00:00 crc kubenswrapper[5119]: I0121 10:00:00.190137 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Jan 21 10:00:00 crc kubenswrapper[5119]: I0121 10:00:00.191020 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Jan 21 10:00:00 crc kubenswrapper[5119]: I0121 10:00:00.320328 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e21db20a-6f9d-4663-bbf3-8e729ec4774f-secret-volume\") pod \"collect-profiles-29483160-xpdns\" (UID: \"e21db20a-6f9d-4663-bbf3-8e729ec4774f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-xpdns"
Jan 21 10:00:00 crc kubenswrapper[5119]: I0121 10:00:00.320409 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnclb\" (UniqueName: \"kubernetes.io/projected/e21db20a-6f9d-4663-bbf3-8e729ec4774f-kube-api-access-rnclb\") pod \"collect-profiles-29483160-xpdns\" (UID: \"e21db20a-6f9d-4663-bbf3-8e729ec4774f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-xpdns"
Jan 21 10:00:00 crc kubenswrapper[5119]: I0121 10:00:00.320439 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e21db20a-6f9d-4663-bbf3-8e729ec4774f-config-volume\") pod \"collect-profiles-29483160-xpdns\" (UID: \"e21db20a-6f9d-4663-bbf3-8e729ec4774f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-xpdns"
Jan 21 10:00:00 crc kubenswrapper[5119]: I0121 10:00:00.320467 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkg77\" (UniqueName: \"kubernetes.io/projected/e8718e06-ef68-4354-92ff-67ea0a52da09-kube-api-access-jkg77\") pod \"auto-csr-approver-29483160-fn4lx\" (UID: \"e8718e06-ef68-4354-92ff-67ea0a52da09\") " pod="openshift-infra/auto-csr-approver-29483160-fn4lx"
Jan 21 10:00:00 crc kubenswrapper[5119]: I0121 10:00:00.421902 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jkg77\" (UniqueName: \"kubernetes.io/projected/e8718e06-ef68-4354-92ff-67ea0a52da09-kube-api-access-jkg77\") pod \"auto-csr-approver-29483160-fn4lx\" (UID: \"e8718e06-ef68-4354-92ff-67ea0a52da09\") " pod="openshift-infra/auto-csr-approver-29483160-fn4lx"
Jan 21 10:00:00 crc kubenswrapper[5119]: I0121 10:00:00.422145 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e21db20a-6f9d-4663-bbf3-8e729ec4774f-secret-volume\") pod \"collect-profiles-29483160-xpdns\" (UID: \"e21db20a-6f9d-4663-bbf3-8e729ec4774f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-xpdns"
Jan 21 10:00:00 crc kubenswrapper[5119]: I0121 10:00:00.422207 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rnclb\" (UniqueName: \"kubernetes.io/projected/e21db20a-6f9d-4663-bbf3-8e729ec4774f-kube-api-access-rnclb\") pod \"collect-profiles-29483160-xpdns\" (UID: \"e21db20a-6f9d-4663-bbf3-8e729ec4774f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-xpdns"
Jan 21 10:00:00 crc kubenswrapper[5119]: I0121 10:00:00.422234 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e21db20a-6f9d-4663-bbf3-8e729ec4774f-config-volume\") pod \"collect-profiles-29483160-xpdns\" (UID: \"e21db20a-6f9d-4663-bbf3-8e729ec4774f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-xpdns"
Jan 21 10:00:00 crc kubenswrapper[5119]: I0121 10:00:00.422999 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e21db20a-6f9d-4663-bbf3-8e729ec4774f-config-volume\") pod \"collect-profiles-29483160-xpdns\" (UID: \"e21db20a-6f9d-4663-bbf3-8e729ec4774f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-xpdns"
Jan 21 10:00:00 crc kubenswrapper[5119]: I0121 10:00:00.439065 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e21db20a-6f9d-4663-bbf3-8e729ec4774f-secret-volume\") pod \"collect-profiles-29483160-xpdns\" (UID: \"e21db20a-6f9d-4663-bbf3-8e729ec4774f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-xpdns"
Jan 21 10:00:00 crc kubenswrapper[5119]: I0121 10:00:00.440010 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkg77\" (UniqueName: \"kubernetes.io/projected/e8718e06-ef68-4354-92ff-67ea0a52da09-kube-api-access-jkg77\") pod \"auto-csr-approver-29483160-fn4lx\" (UID: \"e8718e06-ef68-4354-92ff-67ea0a52da09\") " pod="openshift-infra/auto-csr-approver-29483160-fn4lx"
Jan 21 10:00:00 crc kubenswrapper[5119]: I0121 10:00:00.445167 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnclb\" (UniqueName: \"kubernetes.io/projected/e21db20a-6f9d-4663-bbf3-8e729ec4774f-kube-api-access-rnclb\") pod \"collect-profiles-29483160-xpdns\" (UID: \"e21db20a-6f9d-4663-bbf3-8e729ec4774f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-xpdns"
Jan 21 10:00:00 crc kubenswrapper[5119]: I0121 10:00:00.504904 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483160-fn4lx"
Jan 21 10:00:00 crc kubenswrapper[5119]: I0121 10:00:00.513146 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-xpdns"
Jan 21 10:00:00 crc kubenswrapper[5119]: I0121 10:00:00.928194 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483160-fn4lx"]
Jan 21 10:00:00 crc kubenswrapper[5119]: W0121 10:00:00.933202 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8718e06_ef68_4354_92ff_67ea0a52da09.slice/crio-07bb9eddc8a1ffe6a789d2108bd642d2ba0c71e662f2fa29fd9ca3b7d479133c WatchSource:0}: Error finding container 07bb9eddc8a1ffe6a789d2108bd642d2ba0c71e662f2fa29fd9ca3b7d479133c: Status 404 returned error can't find the container with id 07bb9eddc8a1ffe6a789d2108bd642d2ba0c71e662f2fa29fd9ca3b7d479133c
Jan 21 10:00:00 crc kubenswrapper[5119]: I0121 10:00:00.972399 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483160-fn4lx" event={"ID":"e8718e06-ef68-4354-92ff-67ea0a52da09","Type":"ContainerStarted","Data":"07bb9eddc8a1ffe6a789d2108bd642d2ba0c71e662f2fa29fd9ca3b7d479133c"}
Jan 21 10:00:01 crc kubenswrapper[5119]: I0121 10:00:01.018574 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483160-xpdns"]
Jan 21 10:00:01 crc kubenswrapper[5119]: W0121 10:00:01.029969 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode21db20a_6f9d_4663_bbf3_8e729ec4774f.slice/crio-0ad43c073797fbd9c9b62ee53a715b3a12c31b30b1f1b471d8c7258a4cca3d54 WatchSource:0}: Error finding container 0ad43c073797fbd9c9b62ee53a715b3a12c31b30b1f1b471d8c7258a4cca3d54: Status 404 returned error can't find the container with id 0ad43c073797fbd9c9b62ee53a715b3a12c31b30b1f1b471d8c7258a4cca3d54
Jan 21 10:00:01 crc kubenswrapper[5119]: I0121 10:00:01.979826 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-xpdns" event={"ID":"e21db20a-6f9d-4663-bbf3-8e729ec4774f","Type":"ContainerStarted","Data":"0ad43c073797fbd9c9b62ee53a715b3a12c31b30b1f1b471d8c7258a4cca3d54"}
Jan 21 10:00:02 crc kubenswrapper[5119]: I0121 10:00:02.987792 5119 generic.go:358] "Generic (PLEG): container finished" podID="e21db20a-6f9d-4663-bbf3-8e729ec4774f" containerID="3895c8ae006c3a30aa062e47968a8470d647dfc18c0e6e4f8de52a746e7bfe5d" exitCode=0
Jan 21 10:00:02 crc kubenswrapper[5119]: I0121 10:00:02.987852 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-xpdns" event={"ID":"e21db20a-6f9d-4663-bbf3-8e729ec4774f","Type":"ContainerDied","Data":"3895c8ae006c3a30aa062e47968a8470d647dfc18c0e6e4f8de52a746e7bfe5d"}
Jan 21 10:00:04 crc kubenswrapper[5119]: I0121 10:00:04.315706 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-xpdns"
Jan 21 10:00:04 crc kubenswrapper[5119]: I0121 10:00:04.476709 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e21db20a-6f9d-4663-bbf3-8e729ec4774f-secret-volume\") pod \"e21db20a-6f9d-4663-bbf3-8e729ec4774f\" (UID: \"e21db20a-6f9d-4663-bbf3-8e729ec4774f\") "
Jan 21 10:00:04 crc kubenswrapper[5119]: I0121 10:00:04.476842 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e21db20a-6f9d-4663-bbf3-8e729ec4774f-config-volume\") pod \"e21db20a-6f9d-4663-bbf3-8e729ec4774f\" (UID: \"e21db20a-6f9d-4663-bbf3-8e729ec4774f\") "
Jan 21 10:00:04 crc kubenswrapper[5119]: I0121 10:00:04.476925 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnclb\" (UniqueName: \"kubernetes.io/projected/e21db20a-6f9d-4663-bbf3-8e729ec4774f-kube-api-access-rnclb\") pod \"e21db20a-6f9d-4663-bbf3-8e729ec4774f\" (UID: \"e21db20a-6f9d-4663-bbf3-8e729ec4774f\") "
Jan 21 10:00:04 crc kubenswrapper[5119]: I0121 10:00:04.478051 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e21db20a-6f9d-4663-bbf3-8e729ec4774f-config-volume" (OuterVolumeSpecName: "config-volume") pod "e21db20a-6f9d-4663-bbf3-8e729ec4774f" (UID: "e21db20a-6f9d-4663-bbf3-8e729ec4774f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 10:00:04 crc kubenswrapper[5119]: I0121 10:00:04.482843 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e21db20a-6f9d-4663-bbf3-8e729ec4774f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e21db20a-6f9d-4663-bbf3-8e729ec4774f" (UID: "e21db20a-6f9d-4663-bbf3-8e729ec4774f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 10:00:04 crc kubenswrapper[5119]: I0121 10:00:04.486740 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e21db20a-6f9d-4663-bbf3-8e729ec4774f-kube-api-access-rnclb" (OuterVolumeSpecName: "kube-api-access-rnclb") pod "e21db20a-6f9d-4663-bbf3-8e729ec4774f" (UID: "e21db20a-6f9d-4663-bbf3-8e729ec4774f"). InnerVolumeSpecName "kube-api-access-rnclb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:00:04 crc kubenswrapper[5119]: I0121 10:00:04.578585 5119 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e21db20a-6f9d-4663-bbf3-8e729ec4774f-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 10:00:04 crc kubenswrapper[5119]: I0121 10:00:04.578641 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rnclb\" (UniqueName: \"kubernetes.io/projected/e21db20a-6f9d-4663-bbf3-8e729ec4774f-kube-api-access-rnclb\") on node \"crc\" DevicePath \"\""
Jan 21 10:00:04 crc kubenswrapper[5119]: I0121 10:00:04.578659 5119 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e21db20a-6f9d-4663-bbf3-8e729ec4774f-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 10:00:05 crc kubenswrapper[5119]: I0121 10:00:05.001782 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-xpdns" event={"ID":"e21db20a-6f9d-4663-bbf3-8e729ec4774f","Type":"ContainerDied","Data":"0ad43c073797fbd9c9b62ee53a715b3a12c31b30b1f1b471d8c7258a4cca3d54"}
Jan 21 10:00:05 crc kubenswrapper[5119]: I0121 10:00:05.001823 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ad43c073797fbd9c9b62ee53a715b3a12c31b30b1f1b471d8c7258a4cca3d54"
Jan 21 10:00:05 crc kubenswrapper[5119]: I0121 10:00:05.001911 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-xpdns"
Jan 21 10:00:06 crc kubenswrapper[5119]: E0121 10:00:06.593776 5119 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee0294ff_f61f_492b_b738_fbbee8f757eb.slice/crio-49a6bf6ae56dfc316394a3f87c6c306e64599a06b80212271663b64264194b7a.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 10:00:07 crc kubenswrapper[5119]: I0121 10:00:07.517860 5119 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-f9trx"
Jan 21 10:00:07 crc kubenswrapper[5119]: I0121 10:00:07.536926 5119 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-f9trx"
Jan 21 10:00:08 crc kubenswrapper[5119]: I0121 10:00:08.021046 5119 generic.go:358] "Generic (PLEG): container finished" podID="e8718e06-ef68-4354-92ff-67ea0a52da09" containerID="d5b54ddf4bdb1f499d4fd60b317952e9ac2c24159d2ed43fa4e588b7616d3d6b" exitCode=0
Jan 21 10:00:08 crc kubenswrapper[5119]: I0121 10:00:08.021176 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483160-fn4lx" event={"ID":"e8718e06-ef68-4354-92ff-67ea0a52da09","Type":"ContainerDied","Data":"d5b54ddf4bdb1f499d4fd60b317952e9ac2c24159d2ed43fa4e588b7616d3d6b"}
Jan 21 10:00:08 crc kubenswrapper[5119]: I0121 10:00:08.538098 5119 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-20 09:55:07 +0000 UTC" deadline="2026-02-14 09:04:02.018867698 +0000 UTC"
Jan 21 10:00:08 crc kubenswrapper[5119]: I0121 10:00:08.538556 5119 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="575h3m53.480318973s"
Jan 21 10:00:09 crc kubenswrapper[5119]: I0121 10:00:09.310250 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483160-fn4lx"
Jan 21 10:00:09 crc kubenswrapper[5119]: I0121 10:00:09.341233 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkg77\" (UniqueName: \"kubernetes.io/projected/e8718e06-ef68-4354-92ff-67ea0a52da09-kube-api-access-jkg77\") pod \"e8718e06-ef68-4354-92ff-67ea0a52da09\" (UID: \"e8718e06-ef68-4354-92ff-67ea0a52da09\") "
Jan 21 10:00:09 crc kubenswrapper[5119]: I0121 10:00:09.352826 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8718e06-ef68-4354-92ff-67ea0a52da09-kube-api-access-jkg77" (OuterVolumeSpecName: "kube-api-access-jkg77") pod "e8718e06-ef68-4354-92ff-67ea0a52da09" (UID: "e8718e06-ef68-4354-92ff-67ea0a52da09"). InnerVolumeSpecName "kube-api-access-jkg77". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:00:09 crc kubenswrapper[5119]: I0121 10:00:09.443452 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jkg77\" (UniqueName: \"kubernetes.io/projected/e8718e06-ef68-4354-92ff-67ea0a52da09-kube-api-access-jkg77\") on node \"crc\" DevicePath \"\""
Jan 21 10:00:09 crc kubenswrapper[5119]: I0121 10:00:09.539390 5119 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-20 09:55:07 +0000 UTC" deadline="2026-02-13 19:47:38.728342077 +0000 UTC"
Jan 21 10:00:09 crc kubenswrapper[5119]: I0121 10:00:09.539426 5119 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="561h47m29.188918349s"
Jan 21 10:00:09 crc kubenswrapper[5119]: I0121 10:00:09.574531 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-7tls5" podUID="26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b" containerName="oauth-openshift" containerID="cri-o://1bb476f1e1d9a54e015b035e1dd6ffbdbb34906a41917485ecf2921d9accccef" gracePeriod=15
Jan 21 10:00:09 crc kubenswrapper[5119]: I0121 10:00:09.975897 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-7tls5"
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.009075 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-89495cdd5-xgsp4"]
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.009822 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b" containerName="oauth-openshift"
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.009858 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b" containerName="oauth-openshift"
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.009874 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e21db20a-6f9d-4663-bbf3-8e729ec4774f" containerName="collect-profiles"
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.009879 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="e21db20a-6f9d-4663-bbf3-8e729ec4774f" containerName="collect-profiles"
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.009895 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e8718e06-ef68-4354-92ff-67ea0a52da09" containerName="oc"
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.009901 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8718e06-ef68-4354-92ff-67ea0a52da09" containerName="oc"
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.010020 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="e21db20a-6f9d-4663-bbf3-8e729ec4774f" containerName="collect-profiles"
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.010036 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="e8718e06-ef68-4354-92ff-67ea0a52da09" containerName="oc"
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.010044 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b" containerName="oauth-openshift"
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.017240 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4"
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.031875 5119 generic.go:358] "Generic (PLEG): container finished" podID="26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b" containerID="1bb476f1e1d9a54e015b035e1dd6ffbdbb34906a41917485ecf2921d9accccef" exitCode=0
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.031954 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-7tls5" event={"ID":"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b","Type":"ContainerDied","Data":"1bb476f1e1d9a54e015b035e1dd6ffbdbb34906a41917485ecf2921d9accccef"}
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.031985 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-7tls5" event={"ID":"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b","Type":"ContainerDied","Data":"2751cc9101704dcf4951f6960405fb0d28010502eb0b58aaad831044104aab69"}
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.032007 5119 scope.go:117] "RemoveContainer" containerID="1bb476f1e1d9a54e015b035e1dd6ffbdbb34906a41917485ecf2921d9accccef"
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.032139 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-7tls5"
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.040883 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483160-fn4lx" event={"ID":"e8718e06-ef68-4354-92ff-67ea0a52da09","Type":"ContainerDied","Data":"07bb9eddc8a1ffe6a789d2108bd642d2ba0c71e662f2fa29fd9ca3b7d479133c"}
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.040968 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07bb9eddc8a1ffe6a789d2108bd642d2ba0c71e662f2fa29fd9ca3b7d479133c"
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.041067 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483160-fn4lx"
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.050882 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-89495cdd5-xgsp4"]
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.055731 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-user-template-login\") pod \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") "
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.055772 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-serving-cert\") pod \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") "
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.055807 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8czzd\" (UniqueName: \"kubernetes.io/projected/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-kube-api-access-8czzd\") pod \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") "
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.055851 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-user-template-error\") pod \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") "
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.055865 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-user-idp-0-file-data\") pod \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") "
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.055886 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-ocp-branding-template\") pod \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") "
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.055945 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-audit-policies\") pod \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") "
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.055981 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-audit-dir\") pod \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") "
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.056000 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-user-template-provider-selection\") pod \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") "
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.056021 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-cliconfig\") pod \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") "
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.056071 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-service-ca\") pod \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") "
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.056109 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-router-certs\") pod \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") "
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.056146 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-session\") pod \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") "
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.056167 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-trusted-ca-bundle\") pod \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\" (UID: \"26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b\") "
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.056327 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8133747d-754e-44f8-b93f-b3a85d19b3cc-audit-dir\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4"
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.056353 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w495m\" (UniqueName: \"kubernetes.io/projected/8133747d-754e-44f8-b93f-b3a85d19b3cc-kube-api-access-w495m\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4"
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.056371 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-system-service-ca\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4"
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.056388 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-user-template-error\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4"
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.056412 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4"
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.056438 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-system-router-certs\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4"
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.056481 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4"
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.056515 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-system-cliconfig\") pod
\"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.056535 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.056563 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8133747d-754e-44f8-b93f-b3a85d19b3cc-audit-policies\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.056586 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-system-session\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.056661 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc 
kubenswrapper[5119]: I0121 10:00:10.056682 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.056701 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-user-template-login\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.063224 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b" (UID: "26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.065114 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b" (UID: "26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.065167 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b" (UID: "26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.066094 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b" (UID: "26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.069216 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b" (UID: "26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.075933 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b" (UID: "26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.077720 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b" (UID: "26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.078150 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b" (UID: "26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.078324 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b" (UID: "26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.080562 5119 scope.go:117] "RemoveContainer" containerID="1bb476f1e1d9a54e015b035e1dd6ffbdbb34906a41917485ecf2921d9accccef" Jan 21 10:00:10 crc kubenswrapper[5119]: E0121 10:00:10.081921 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bb476f1e1d9a54e015b035e1dd6ffbdbb34906a41917485ecf2921d9accccef\": container with ID starting with 1bb476f1e1d9a54e015b035e1dd6ffbdbb34906a41917485ecf2921d9accccef not found: ID does not exist" containerID="1bb476f1e1d9a54e015b035e1dd6ffbdbb34906a41917485ecf2921d9accccef" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.081966 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bb476f1e1d9a54e015b035e1dd6ffbdbb34906a41917485ecf2921d9accccef"} err="failed to get container status \"1bb476f1e1d9a54e015b035e1dd6ffbdbb34906a41917485ecf2921d9accccef\": rpc error: code = NotFound desc = could not find container \"1bb476f1e1d9a54e015b035e1dd6ffbdbb34906a41917485ecf2921d9accccef\": container with ID starting with 1bb476f1e1d9a54e015b035e1dd6ffbdbb34906a41917485ecf2921d9accccef not found: ID does not exist" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.082074 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b" (UID: "26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.082397 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b" (UID: "26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.083483 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-kube-api-access-8czzd" (OuterVolumeSpecName: "kube-api-access-8czzd") pod "26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b" (UID: "26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b"). InnerVolumeSpecName "kube-api-access-8czzd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.083867 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b" (UID: "26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.090347 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b" (UID: "26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.158442 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-system-router-certs\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.160933 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.161269 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.161555 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.161689 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.162714 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8133747d-754e-44f8-b93f-b3a85d19b3cc-audit-policies\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.162376 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-system-router-certs\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.162318 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.163567 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8133747d-754e-44f8-b93f-b3a85d19b3cc-audit-policies\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 
10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.164372 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-system-session\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.164599 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.164730 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.164840 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-user-template-login\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.165029 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8133747d-754e-44f8-b93f-b3a85d19b3cc-audit-dir\") pod 
\"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.165164 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w495m\" (UniqueName: \"kubernetes.io/projected/8133747d-754e-44f8-b93f-b3a85d19b3cc-kube-api-access-w495m\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.165337 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-system-service-ca\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.165684 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-user-template-error\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.166266 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.166496 5119 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.166663 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.166810 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.167131 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.169441 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.169643 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.167810 5119 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.168069 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-user-template-login\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.168142 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-system-session\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.165546 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8133747d-754e-44f8-b93f-b3a85d19b3cc-audit-dir\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.167383 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-system-service-ca\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" 
Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.170683 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.170707 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8czzd\" (UniqueName: \"kubernetes.io/projected/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-kube-api-access-8czzd\") on node \"crc\" DevicePath \"\"" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.170722 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.170734 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.170747 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.170763 5119 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.170775 5119 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-audit-dir\") on node 
\"crc\" DevicePath \"\"" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.170787 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.170800 5119 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.172519 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.172913 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-user-template-error\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.175228 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8133747d-754e-44f8-b93f-b3a85d19b3cc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc 
kubenswrapper[5119]: I0121 10:00:10.184198 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w495m\" (UniqueName: \"kubernetes.io/projected/8133747d-754e-44f8-b93f-b3a85d19b3cc-kube-api-access-w495m\") pod \"oauth-openshift-89495cdd5-xgsp4\" (UID: \"8133747d-754e-44f8-b93f-b3a85d19b3cc\") " pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.356084 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-7tls5"] Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.360471 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-7tls5"] Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.364218 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.596569 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b" path="/var/lib/kubelet/pods/26ecbc17-f0fb-4cc5-90a3-cb70af5ac44b/volumes" Jan 21 10:00:10 crc kubenswrapper[5119]: I0121 10:00:10.768399 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-89495cdd5-xgsp4"] Jan 21 10:00:10 crc kubenswrapper[5119]: W0121 10:00:10.770711 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8133747d_754e_44f8_b93f_b3a85d19b3cc.slice/crio-8faab42829d071c74a607f978e66b7c4df8f264a9d9386b17f483d4867206270 WatchSource:0}: Error finding container 8faab42829d071c74a607f978e66b7c4df8f264a9d9386b17f483d4867206270: Status 404 returned error can't find the container with id 8faab42829d071c74a607f978e66b7c4df8f264a9d9386b17f483d4867206270 Jan 21 10:00:11 crc kubenswrapper[5119]: I0121 10:00:11.046796 
5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" event={"ID":"8133747d-754e-44f8-b93f-b3a85d19b3cc","Type":"ContainerStarted","Data":"8faab42829d071c74a607f978e66b7c4df8f264a9d9386b17f483d4867206270"} Jan 21 10:00:12 crc kubenswrapper[5119]: I0121 10:00:12.056668 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" event={"ID":"8133747d-754e-44f8-b93f-b3a85d19b3cc","Type":"ContainerStarted","Data":"2dafa0f105b22442f93dbdb21a5defb3a782c598ce29254bcc9bb404846e81bf"} Jan 21 10:00:12 crc kubenswrapper[5119]: I0121 10:00:12.056972 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:12 crc kubenswrapper[5119]: I0121 10:00:12.061740 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" Jan 21 10:00:12 crc kubenswrapper[5119]: I0121 10:00:12.074257 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-89495cdd5-xgsp4" podStartSLOduration=28.07424248 podStartE2EDuration="28.07424248s" podCreationTimestamp="2026-01-21 09:59:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:00:12.073804558 +0000 UTC m=+327.741896256" watchObservedRunningTime="2026-01-21 10:00:12.07424248 +0000 UTC m=+327.742334158" Jan 21 10:00:12 crc kubenswrapper[5119]: I0121 10:00:12.624579 5119 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 10:00:16 crc kubenswrapper[5119]: E0121 10:00:16.727837 5119 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee0294ff_f61f_492b_b738_fbbee8f757eb.slice/crio-49a6bf6ae56dfc316394a3f87c6c306e64599a06b80212271663b64264194b7a.scope\": RecentStats: unable to find data in memory cache]" Jan 21 10:00:17 crc kubenswrapper[5119]: I0121 10:00:17.963703 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-vxrkh" Jan 21 10:00:18 crc kubenswrapper[5119]: I0121 10:00:18.029444 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-94gcl"] Jan 21 10:00:26 crc kubenswrapper[5119]: E0121 10:00:26.879893 5119 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee0294ff_f61f_492b_b738_fbbee8f757eb.slice/crio-49a6bf6ae56dfc316394a3f87c6c306e64599a06b80212271663b64264194b7a.scope\": RecentStats: unable to find data in memory cache]" Jan 21 10:00:36 crc kubenswrapper[5119]: E0121 10:00:36.984301 5119 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee0294ff_f61f_492b_b738_fbbee8f757eb.slice/crio-49a6bf6ae56dfc316394a3f87c6c306e64599a06b80212271663b64264194b7a.scope\": RecentStats: unable to find data in memory cache]" Jan 21 10:00:43 crc kubenswrapper[5119]: I0121 10:00:43.081910 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-94gcl" podUID="7c21f56d-7f02-4bb3-bc7e-82b4d990e381" containerName="registry" containerID="cri-o://0fcbe996c79ea8d19e08c761140b51257cf65f78566c2b06b9ca8ec0084d446d" gracePeriod=30 Jan 21 10:00:43 crc kubenswrapper[5119]: I0121 10:00:43.933278 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 10:00:43 crc kubenswrapper[5119]: I0121 10:00:43.995066 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-registry-tls\") pod \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " Jan 21 10:00:43 crc kubenswrapper[5119]: I0121 10:00:43.995143 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-ca-trust-extracted\") pod \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " Jan 21 10:00:43 crc kubenswrapper[5119]: I0121 10:00:43.995203 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-registry-certificates\") pod \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " Jan 21 10:00:43 crc kubenswrapper[5119]: I0121 10:00:43.995264 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-trusted-ca\") pod \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " Jan 21 10:00:43 crc kubenswrapper[5119]: I0121 10:00:43.995288 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-bound-sa-token\") pod \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " Jan 21 10:00:43 crc kubenswrapper[5119]: I0121 10:00:43.995331 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-lvrbm\" (UniqueName: \"kubernetes.io/projected/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-kube-api-access-lvrbm\") pod \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " Jan 21 10:00:43 crc kubenswrapper[5119]: I0121 10:00:43.995381 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-installation-pull-secrets\") pod \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " Jan 21 10:00:43 crc kubenswrapper[5119]: I0121 10:00:43.995480 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\" (UID: \"7c21f56d-7f02-4bb3-bc7e-82b4d990e381\") " Jan 21 10:00:43 crc kubenswrapper[5119]: I0121 10:00:43.996520 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "7c21f56d-7f02-4bb3-bc7e-82b4d990e381" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:00:43 crc kubenswrapper[5119]: I0121 10:00:43.997786 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "7c21f56d-7f02-4bb3-bc7e-82b4d990e381" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:00:44 crc kubenswrapper[5119]: I0121 10:00:44.001039 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "7c21f56d-7f02-4bb3-bc7e-82b4d990e381" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:00:44 crc kubenswrapper[5119]: I0121 10:00:44.001111 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "7c21f56d-7f02-4bb3-bc7e-82b4d990e381" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:00:44 crc kubenswrapper[5119]: I0121 10:00:44.001767 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "7c21f56d-7f02-4bb3-bc7e-82b4d990e381" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:00:44 crc kubenswrapper[5119]: I0121 10:00:44.004527 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-kube-api-access-lvrbm" (OuterVolumeSpecName: "kube-api-access-lvrbm") pod "7c21f56d-7f02-4bb3-bc7e-82b4d990e381" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381"). InnerVolumeSpecName "kube-api-access-lvrbm". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:00:44 crc kubenswrapper[5119]: I0121 10:00:44.007286 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "7c21f56d-7f02-4bb3-bc7e-82b4d990e381" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 21 10:00:44 crc kubenswrapper[5119]: I0121 10:00:44.012180 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "7c21f56d-7f02-4bb3-bc7e-82b4d990e381" (UID: "7c21f56d-7f02-4bb3-bc7e-82b4d990e381"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:00:44 crc kubenswrapper[5119]: I0121 10:00:44.096831 5119 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 10:00:44 crc kubenswrapper[5119]: I0121 10:00:44.096876 5119 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:00:44 crc kubenswrapper[5119]: I0121 10:00:44.096887 5119 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 10:00:44 crc kubenswrapper[5119]: I0121 10:00:44.096896 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lvrbm\" (UniqueName: 
\"kubernetes.io/projected/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-kube-api-access-lvrbm\") on node \"crc\" DevicePath \"\"" Jan 21 10:00:44 crc kubenswrapper[5119]: I0121 10:00:44.096905 5119 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 10:00:44 crc kubenswrapper[5119]: I0121 10:00:44.096916 5119 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:00:44 crc kubenswrapper[5119]: I0121 10:00:44.096930 5119 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7c21f56d-7f02-4bb3-bc7e-82b4d990e381-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 10:00:44 crc kubenswrapper[5119]: I0121 10:00:44.398710 5119 generic.go:358] "Generic (PLEG): container finished" podID="7c21f56d-7f02-4bb3-bc7e-82b4d990e381" containerID="0fcbe996c79ea8d19e08c761140b51257cf65f78566c2b06b9ca8ec0084d446d" exitCode=0 Jan 21 10:00:44 crc kubenswrapper[5119]: I0121 10:00:44.398781 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-94gcl" Jan 21 10:00:44 crc kubenswrapper[5119]: I0121 10:00:44.398798 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-94gcl" event={"ID":"7c21f56d-7f02-4bb3-bc7e-82b4d990e381","Type":"ContainerDied","Data":"0fcbe996c79ea8d19e08c761140b51257cf65f78566c2b06b9ca8ec0084d446d"} Jan 21 10:00:44 crc kubenswrapper[5119]: I0121 10:00:44.398834 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-94gcl" event={"ID":"7c21f56d-7f02-4bb3-bc7e-82b4d990e381","Type":"ContainerDied","Data":"387349e9b44a21dbfd81e00c4de987153a2d13f74f2cb776de7e06c8afe54a4c"} Jan 21 10:00:44 crc kubenswrapper[5119]: I0121 10:00:44.398850 5119 scope.go:117] "RemoveContainer" containerID="0fcbe996c79ea8d19e08c761140b51257cf65f78566c2b06b9ca8ec0084d446d" Jan 21 10:00:44 crc kubenswrapper[5119]: I0121 10:00:44.416515 5119 scope.go:117] "RemoveContainer" containerID="0fcbe996c79ea8d19e08c761140b51257cf65f78566c2b06b9ca8ec0084d446d" Jan 21 10:00:44 crc kubenswrapper[5119]: E0121 10:00:44.417204 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fcbe996c79ea8d19e08c761140b51257cf65f78566c2b06b9ca8ec0084d446d\": container with ID starting with 0fcbe996c79ea8d19e08c761140b51257cf65f78566c2b06b9ca8ec0084d446d not found: ID does not exist" containerID="0fcbe996c79ea8d19e08c761140b51257cf65f78566c2b06b9ca8ec0084d446d" Jan 21 10:00:44 crc kubenswrapper[5119]: I0121 10:00:44.417337 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fcbe996c79ea8d19e08c761140b51257cf65f78566c2b06b9ca8ec0084d446d"} err="failed to get container status \"0fcbe996c79ea8d19e08c761140b51257cf65f78566c2b06b9ca8ec0084d446d\": rpc error: code = NotFound desc = could not find container 
\"0fcbe996c79ea8d19e08c761140b51257cf65f78566c2b06b9ca8ec0084d446d\": container with ID starting with 0fcbe996c79ea8d19e08c761140b51257cf65f78566c2b06b9ca8ec0084d446d not found: ID does not exist" Jan 21 10:00:44 crc kubenswrapper[5119]: I0121 10:00:44.433593 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-94gcl"] Jan 21 10:00:44 crc kubenswrapper[5119]: I0121 10:00:44.437187 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-94gcl"] Jan 21 10:00:44 crc kubenswrapper[5119]: I0121 10:00:44.597904 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c21f56d-7f02-4bb3-bc7e-82b4d990e381" path="/var/lib/kubelet/pods/7c21f56d-7f02-4bb3-bc7e-82b4d990e381/volumes" Jan 21 10:02:00 crc kubenswrapper[5119]: I0121 10:02:00.143756 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483162-pzxwx"] Jan 21 10:02:00 crc kubenswrapper[5119]: I0121 10:02:00.145468 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7c21f56d-7f02-4bb3-bc7e-82b4d990e381" containerName="registry" Jan 21 10:02:00 crc kubenswrapper[5119]: I0121 10:02:00.145488 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c21f56d-7f02-4bb3-bc7e-82b4d990e381" containerName="registry" Jan 21 10:02:00 crc kubenswrapper[5119]: I0121 10:02:00.145644 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="7c21f56d-7f02-4bb3-bc7e-82b4d990e381" containerName="registry" Jan 21 10:02:00 crc kubenswrapper[5119]: I0121 10:02:00.162221 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483162-pzxwx"] Jan 21 10:02:00 crc kubenswrapper[5119]: I0121 10:02:00.162403 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483162-pzxwx" Jan 21 10:02:00 crc kubenswrapper[5119]: I0121 10:02:00.165128 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:02:00 crc kubenswrapper[5119]: I0121 10:02:00.165464 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\"" Jan 21 10:02:00 crc kubenswrapper[5119]: I0121 10:02:00.165498 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:02:00 crc kubenswrapper[5119]: I0121 10:02:00.255389 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6krwm\" (UniqueName: \"kubernetes.io/projected/5d98510c-550d-49f1-a9f2-e7457a41988d-kube-api-access-6krwm\") pod \"auto-csr-approver-29483162-pzxwx\" (UID: \"5d98510c-550d-49f1-a9f2-e7457a41988d\") " pod="openshift-infra/auto-csr-approver-29483162-pzxwx" Jan 21 10:02:00 crc kubenswrapper[5119]: I0121 10:02:00.356625 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6krwm\" (UniqueName: \"kubernetes.io/projected/5d98510c-550d-49f1-a9f2-e7457a41988d-kube-api-access-6krwm\") pod \"auto-csr-approver-29483162-pzxwx\" (UID: \"5d98510c-550d-49f1-a9f2-e7457a41988d\") " pod="openshift-infra/auto-csr-approver-29483162-pzxwx" Jan 21 10:02:00 crc kubenswrapper[5119]: I0121 10:02:00.380674 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6krwm\" (UniqueName: \"kubernetes.io/projected/5d98510c-550d-49f1-a9f2-e7457a41988d-kube-api-access-6krwm\") pod \"auto-csr-approver-29483162-pzxwx\" (UID: \"5d98510c-550d-49f1-a9f2-e7457a41988d\") " pod="openshift-infra/auto-csr-approver-29483162-pzxwx" Jan 21 10:02:00 crc kubenswrapper[5119]: I0121 10:02:00.482013 5119 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483162-pzxwx" Jan 21 10:02:00 crc kubenswrapper[5119]: I0121 10:02:00.647263 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483162-pzxwx"] Jan 21 10:02:00 crc kubenswrapper[5119]: I0121 10:02:00.851258 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483162-pzxwx" event={"ID":"5d98510c-550d-49f1-a9f2-e7457a41988d","Type":"ContainerStarted","Data":"54122060a4de65181aa47bbc83d9020db77b9513915b6937069061788ef1ab64"} Jan 21 10:02:02 crc kubenswrapper[5119]: I0121 10:02:02.864485 5119 generic.go:358] "Generic (PLEG): container finished" podID="5d98510c-550d-49f1-a9f2-e7457a41988d" containerID="ce6d3a7102ca8fdadb65c797d828d03c7bf0cd84dfea82c4338daf6b938cfd95" exitCode=0 Jan 21 10:02:02 crc kubenswrapper[5119]: I0121 10:02:02.864665 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483162-pzxwx" event={"ID":"5d98510c-550d-49f1-a9f2-e7457a41988d","Type":"ContainerDied","Data":"ce6d3a7102ca8fdadb65c797d828d03c7bf0cd84dfea82c4338daf6b938cfd95"} Jan 21 10:02:04 crc kubenswrapper[5119]: I0121 10:02:04.148054 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483162-pzxwx" Jan 21 10:02:04 crc kubenswrapper[5119]: I0121 10:02:04.306892 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6krwm\" (UniqueName: \"kubernetes.io/projected/5d98510c-550d-49f1-a9f2-e7457a41988d-kube-api-access-6krwm\") pod \"5d98510c-550d-49f1-a9f2-e7457a41988d\" (UID: \"5d98510c-550d-49f1-a9f2-e7457a41988d\") " Jan 21 10:02:04 crc kubenswrapper[5119]: I0121 10:02:04.312364 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d98510c-550d-49f1-a9f2-e7457a41988d-kube-api-access-6krwm" (OuterVolumeSpecName: "kube-api-access-6krwm") pod "5d98510c-550d-49f1-a9f2-e7457a41988d" (UID: "5d98510c-550d-49f1-a9f2-e7457a41988d"). InnerVolumeSpecName "kube-api-access-6krwm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:02:04 crc kubenswrapper[5119]: I0121 10:02:04.408688 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6krwm\" (UniqueName: \"kubernetes.io/projected/5d98510c-550d-49f1-a9f2-e7457a41988d-kube-api-access-6krwm\") on node \"crc\" DevicePath \"\"" Jan 21 10:02:04 crc kubenswrapper[5119]: I0121 10:02:04.875749 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483162-pzxwx" Jan 21 10:02:04 crc kubenswrapper[5119]: I0121 10:02:04.875748 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483162-pzxwx" event={"ID":"5d98510c-550d-49f1-a9f2-e7457a41988d","Type":"ContainerDied","Data":"54122060a4de65181aa47bbc83d9020db77b9513915b6937069061788ef1ab64"} Jan 21 10:02:04 crc kubenswrapper[5119]: I0121 10:02:04.875868 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54122060a4de65181aa47bbc83d9020db77b9513915b6937069061788ef1ab64" Jan 21 10:02:19 crc kubenswrapper[5119]: I0121 10:02:19.918574 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:02:19 crc kubenswrapper[5119]: I0121 10:02:19.919201 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:02:49 crc kubenswrapper[5119]: I0121 10:02:49.918774 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:02:49 crc kubenswrapper[5119]: I0121 10:02:49.919414 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:03:19 crc kubenswrapper[5119]: I0121 10:03:19.918904 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:03:19 crc kubenswrapper[5119]: I0121 10:03:19.919559 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:03:19 crc kubenswrapper[5119]: I0121 10:03:19.919660 5119 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" Jan 21 10:03:19 crc kubenswrapper[5119]: I0121 10:03:19.920547 5119 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ccf407c7bf9fef5463fbdd0f20c4692fd497cff47399ce80319fb6eadef27ee1"} pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 10:03:19 crc kubenswrapper[5119]: I0121 10:03:19.920681 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" containerID="cri-o://ccf407c7bf9fef5463fbdd0f20c4692fd497cff47399ce80319fb6eadef27ee1" gracePeriod=600 Jan 21 10:03:20 crc kubenswrapper[5119]: I0121 10:03:20.335696 5119 generic.go:358] "Generic 
(PLEG): container finished" podID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerID="ccf407c7bf9fef5463fbdd0f20c4692fd497cff47399ce80319fb6eadef27ee1" exitCode=0 Jan 21 10:03:20 crc kubenswrapper[5119]: I0121 10:03:20.335805 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerDied","Data":"ccf407c7bf9fef5463fbdd0f20c4692fd497cff47399ce80319fb6eadef27ee1"} Jan 21 10:03:20 crc kubenswrapper[5119]: I0121 10:03:20.335879 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerStarted","Data":"0fcd4683c1f89fdf153473f37b8eee4ecff9e78a85c4e6b63e9902ab31d6f4a8"} Jan 21 10:03:20 crc kubenswrapper[5119]: I0121 10:03:20.335910 5119 scope.go:117] "RemoveContainer" containerID="d40e23f49b5ccbd79a7b7631bfb1923bda39e2fe75a27231461f8d5e6aec28b1" Jan 21 10:04:00 crc kubenswrapper[5119]: I0121 10:04:00.128345 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483164-2hfvk"] Jan 21 10:04:00 crc kubenswrapper[5119]: I0121 10:04:00.130945 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5d98510c-550d-49f1-a9f2-e7457a41988d" containerName="oc" Jan 21 10:04:00 crc kubenswrapper[5119]: I0121 10:04:00.131051 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d98510c-550d-49f1-a9f2-e7457a41988d" containerName="oc" Jan 21 10:04:00 crc kubenswrapper[5119]: I0121 10:04:00.131262 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="5d98510c-550d-49f1-a9f2-e7457a41988d" containerName="oc" Jan 21 10:04:00 crc kubenswrapper[5119]: I0121 10:04:00.140797 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483164-2hfvk"] Jan 21 10:04:00 crc kubenswrapper[5119]: I0121 10:04:00.140926 
5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483164-2hfvk" Jan 21 10:04:00 crc kubenswrapper[5119]: I0121 10:04:00.143409 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:04:00 crc kubenswrapper[5119]: I0121 10:04:00.144309 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\"" Jan 21 10:04:00 crc kubenswrapper[5119]: I0121 10:04:00.144310 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:04:00 crc kubenswrapper[5119]: I0121 10:04:00.204953 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9ws2\" (UniqueName: \"kubernetes.io/projected/acfe51cb-4322-420b-bbbb-de502ae4c2f6-kube-api-access-g9ws2\") pod \"auto-csr-approver-29483164-2hfvk\" (UID: \"acfe51cb-4322-420b-bbbb-de502ae4c2f6\") " pod="openshift-infra/auto-csr-approver-29483164-2hfvk" Jan 21 10:04:00 crc kubenswrapper[5119]: I0121 10:04:00.306714 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g9ws2\" (UniqueName: \"kubernetes.io/projected/acfe51cb-4322-420b-bbbb-de502ae4c2f6-kube-api-access-g9ws2\") pod \"auto-csr-approver-29483164-2hfvk\" (UID: \"acfe51cb-4322-420b-bbbb-de502ae4c2f6\") " pod="openshift-infra/auto-csr-approver-29483164-2hfvk" Jan 21 10:04:00 crc kubenswrapper[5119]: I0121 10:04:00.330736 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9ws2\" (UniqueName: \"kubernetes.io/projected/acfe51cb-4322-420b-bbbb-de502ae4c2f6-kube-api-access-g9ws2\") pod \"auto-csr-approver-29483164-2hfvk\" (UID: \"acfe51cb-4322-420b-bbbb-de502ae4c2f6\") " pod="openshift-infra/auto-csr-approver-29483164-2hfvk" Jan 21 10:04:00 crc 
kubenswrapper[5119]: I0121 10:04:00.463375 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483164-2hfvk"
Jan 21 10:04:00 crc kubenswrapper[5119]: I0121 10:04:00.895648 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483164-2hfvk"]
Jan 21 10:04:00 crc kubenswrapper[5119]: W0121 10:04:00.905227 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacfe51cb_4322_420b_bbbb_de502ae4c2f6.slice/crio-ad838b50d4360d30870c14597cdcd5af3abb3ddf1653bf28cc8ed1071d2aa624 WatchSource:0}: Error finding container ad838b50d4360d30870c14597cdcd5af3abb3ddf1653bf28cc8ed1071d2aa624: Status 404 returned error can't find the container with id ad838b50d4360d30870c14597cdcd5af3abb3ddf1653bf28cc8ed1071d2aa624
Jan 21 10:04:01 crc kubenswrapper[5119]: I0121 10:04:01.608762 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483164-2hfvk" event={"ID":"acfe51cb-4322-420b-bbbb-de502ae4c2f6","Type":"ContainerStarted","Data":"ad838b50d4360d30870c14597cdcd5af3abb3ddf1653bf28cc8ed1071d2aa624"}
Jan 21 10:04:02 crc kubenswrapper[5119]: I0121 10:04:02.615765 5119 generic.go:358] "Generic (PLEG): container finished" podID="acfe51cb-4322-420b-bbbb-de502ae4c2f6" containerID="3862bf87328c74f16a513eeb7d8ce8aeca2c4fe2745d75fdc8c458c32a83cd2c" exitCode=0
Jan 21 10:04:02 crc kubenswrapper[5119]: I0121 10:04:02.615827 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483164-2hfvk" event={"ID":"acfe51cb-4322-420b-bbbb-de502ae4c2f6","Type":"ContainerDied","Data":"3862bf87328c74f16a513eeb7d8ce8aeca2c4fe2745d75fdc8c458c32a83cd2c"}
Jan 21 10:04:03 crc kubenswrapper[5119]: I0121 10:04:03.803109 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483164-2hfvk"
Jan 21 10:04:03 crc kubenswrapper[5119]: I0121 10:04:03.947352 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9ws2\" (UniqueName: \"kubernetes.io/projected/acfe51cb-4322-420b-bbbb-de502ae4c2f6-kube-api-access-g9ws2\") pod \"acfe51cb-4322-420b-bbbb-de502ae4c2f6\" (UID: \"acfe51cb-4322-420b-bbbb-de502ae4c2f6\") "
Jan 21 10:04:03 crc kubenswrapper[5119]: I0121 10:04:03.953264 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acfe51cb-4322-420b-bbbb-de502ae4c2f6-kube-api-access-g9ws2" (OuterVolumeSpecName: "kube-api-access-g9ws2") pod "acfe51cb-4322-420b-bbbb-de502ae4c2f6" (UID: "acfe51cb-4322-420b-bbbb-de502ae4c2f6"). InnerVolumeSpecName "kube-api-access-g9ws2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:04:04 crc kubenswrapper[5119]: I0121 10:04:04.048732 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g9ws2\" (UniqueName: \"kubernetes.io/projected/acfe51cb-4322-420b-bbbb-de502ae4c2f6-kube-api-access-g9ws2\") on node \"crc\" DevicePath \"\""
Jan 21 10:04:04 crc kubenswrapper[5119]: I0121 10:04:04.626954 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483164-2hfvk" event={"ID":"acfe51cb-4322-420b-bbbb-de502ae4c2f6","Type":"ContainerDied","Data":"ad838b50d4360d30870c14597cdcd5af3abb3ddf1653bf28cc8ed1071d2aa624"}
Jan 21 10:04:04 crc kubenswrapper[5119]: I0121 10:04:04.626989 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad838b50d4360d30870c14597cdcd5af3abb3ddf1653bf28cc8ed1071d2aa624"
Jan 21 10:04:04 crc kubenswrapper[5119]: I0121 10:04:04.626997 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483164-2hfvk"
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.541930 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv"]
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.542702 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv" podUID="766a5e24-f953-49f2-b732-1a783ea97e3f" containerName="kube-rbac-proxy" containerID="cri-o://f897ff110499e10b426ada26c97d71a54ede29287a1d163998530c73051c54cc" gracePeriod=30
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.542778 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv" podUID="766a5e24-f953-49f2-b732-1a783ea97e3f" containerName="ovnkube-cluster-manager" containerID="cri-o://5db6aa37f593c3cc95efad0c2d4844c706ccdebd4a245723f7db8072eb6551b3" gracePeriod=30
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.740506 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv"
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.767132 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lnxvl"]
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.767830 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="ovn-controller" containerID="cri-o://1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459" gracePeriod=30
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.767960 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622" gracePeriod=30
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.767955 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="nbdb" containerID="cri-o://0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc" gracePeriod=30
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.767995 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="sbdb" containerID="cri-o://9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8" gracePeriod=30
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.768036 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="northd" containerID="cri-o://60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345" gracePeriod=30
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.768130 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="ovn-acl-logging" containerID="cri-o://6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8" gracePeriod=30
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.768196 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="kube-rbac-proxy-node" containerID="cri-o://1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e" gracePeriod=30
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.771385 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-k8nhj"]
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.771929 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="acfe51cb-4322-420b-bbbb-de502ae4c2f6" containerName="oc"
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.771947 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="acfe51cb-4322-420b-bbbb-de502ae4c2f6" containerName="oc"
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.771959 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="766a5e24-f953-49f2-b732-1a783ea97e3f" containerName="ovnkube-cluster-manager"
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.771964 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="766a5e24-f953-49f2-b732-1a783ea97e3f" containerName="ovnkube-cluster-manager"
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.771976 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="766a5e24-f953-49f2-b732-1a783ea97e3f" containerName="kube-rbac-proxy"
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.771981 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="766a5e24-f953-49f2-b732-1a783ea97e3f" containerName="kube-rbac-proxy"
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.772068 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="acfe51cb-4322-420b-bbbb-de502ae4c2f6" containerName="oc"
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.772077 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="766a5e24-f953-49f2-b732-1a783ea97e3f" containerName="ovnkube-cluster-manager"
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.772084 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="766a5e24-f953-49f2-b732-1a783ea97e3f" containerName="kube-rbac-proxy"
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.777529 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-k8nhj"
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.794884 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="ovnkube-controller" containerID="cri-o://3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546" gracePeriod=30
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.832412 5119 generic.go:358] "Generic (PLEG): container finished" podID="766a5e24-f953-49f2-b732-1a783ea97e3f" containerID="5db6aa37f593c3cc95efad0c2d4844c706ccdebd4a245723f7db8072eb6551b3" exitCode=0
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.832443 5119 generic.go:358] "Generic (PLEG): container finished" podID="766a5e24-f953-49f2-b732-1a783ea97e3f" containerID="f897ff110499e10b426ada26c97d71a54ede29287a1d163998530c73051c54cc" exitCode=0
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.832457 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv" event={"ID":"766a5e24-f953-49f2-b732-1a783ea97e3f","Type":"ContainerDied","Data":"5db6aa37f593c3cc95efad0c2d4844c706ccdebd4a245723f7db8072eb6551b3"}
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.832500 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv"
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.832526 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv" event={"ID":"766a5e24-f953-49f2-b732-1a783ea97e3f","Type":"ContainerDied","Data":"f897ff110499e10b426ada26c97d71a54ede29287a1d163998530c73051c54cc"}
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.832541 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv" event={"ID":"766a5e24-f953-49f2-b732-1a783ea97e3f","Type":"ContainerDied","Data":"7647709b703c341792cbbbf669bdafc71596df1ac074149af0a2d16bb250099b"}
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.832560 5119 scope.go:117] "RemoveContainer" containerID="5db6aa37f593c3cc95efad0c2d4844c706ccdebd4a245723f7db8072eb6551b3"
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.845072 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/766a5e24-f953-49f2-b732-1a783ea97e3f-ovnkube-config\") pod \"766a5e24-f953-49f2-b732-1a783ea97e3f\" (UID: \"766a5e24-f953-49f2-b732-1a783ea97e3f\") "
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.845138 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btx85\" (UniqueName: \"kubernetes.io/projected/766a5e24-f953-49f2-b732-1a783ea97e3f-kube-api-access-btx85\") pod \"766a5e24-f953-49f2-b732-1a783ea97e3f\" (UID: \"766a5e24-f953-49f2-b732-1a783ea97e3f\") "
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.845256 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/766a5e24-f953-49f2-b732-1a783ea97e3f-ovn-control-plane-metrics-cert\") pod \"766a5e24-f953-49f2-b732-1a783ea97e3f\" (UID: \"766a5e24-f953-49f2-b732-1a783ea97e3f\") "
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.845291 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/766a5e24-f953-49f2-b732-1a783ea97e3f-env-overrides\") pod \"766a5e24-f953-49f2-b732-1a783ea97e3f\" (UID: \"766a5e24-f953-49f2-b732-1a783ea97e3f\") "
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.846002 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/766a5e24-f953-49f2-b732-1a783ea97e3f-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "766a5e24-f953-49f2-b732-1a783ea97e3f" (UID: "766a5e24-f953-49f2-b732-1a783ea97e3f"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.846100 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/766a5e24-f953-49f2-b732-1a783ea97e3f-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "766a5e24-f953-49f2-b732-1a783ea97e3f" (UID: "766a5e24-f953-49f2-b732-1a783ea97e3f"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.851365 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/766a5e24-f953-49f2-b732-1a783ea97e3f-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "766a5e24-f953-49f2-b732-1a783ea97e3f" (UID: "766a5e24-f953-49f2-b732-1a783ea97e3f"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.853150 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/766a5e24-f953-49f2-b732-1a783ea97e3f-kube-api-access-btx85" (OuterVolumeSpecName: "kube-api-access-btx85") pod "766a5e24-f953-49f2-b732-1a783ea97e3f" (UID: "766a5e24-f953-49f2-b732-1a783ea97e3f"). InnerVolumeSpecName "kube-api-access-btx85". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.916190 5119 scope.go:117] "RemoveContainer" containerID="f897ff110499e10b426ada26c97d71a54ede29287a1d163998530c73051c54cc"
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.929257 5119 scope.go:117] "RemoveContainer" containerID="5db6aa37f593c3cc95efad0c2d4844c706ccdebd4a245723f7db8072eb6551b3"
Jan 21 10:04:40 crc kubenswrapper[5119]: E0121 10:04:40.929708 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5db6aa37f593c3cc95efad0c2d4844c706ccdebd4a245723f7db8072eb6551b3\": container with ID starting with 5db6aa37f593c3cc95efad0c2d4844c706ccdebd4a245723f7db8072eb6551b3 not found: ID does not exist" containerID="5db6aa37f593c3cc95efad0c2d4844c706ccdebd4a245723f7db8072eb6551b3"
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.929742 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5db6aa37f593c3cc95efad0c2d4844c706ccdebd4a245723f7db8072eb6551b3"} err="failed to get container status \"5db6aa37f593c3cc95efad0c2d4844c706ccdebd4a245723f7db8072eb6551b3\": rpc error: code = NotFound desc = could not find container \"5db6aa37f593c3cc95efad0c2d4844c706ccdebd4a245723f7db8072eb6551b3\": container with ID starting with 5db6aa37f593c3cc95efad0c2d4844c706ccdebd4a245723f7db8072eb6551b3 not found: ID does not exist"
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.929768 5119 scope.go:117] "RemoveContainer" containerID="f897ff110499e10b426ada26c97d71a54ede29287a1d163998530c73051c54cc"
Jan 21 10:04:40 crc kubenswrapper[5119]: E0121 10:04:40.929977 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f897ff110499e10b426ada26c97d71a54ede29287a1d163998530c73051c54cc\": container with ID starting with f897ff110499e10b426ada26c97d71a54ede29287a1d163998530c73051c54cc not found: ID does not exist" containerID="f897ff110499e10b426ada26c97d71a54ede29287a1d163998530c73051c54cc"
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.930004 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f897ff110499e10b426ada26c97d71a54ede29287a1d163998530c73051c54cc"} err="failed to get container status \"f897ff110499e10b426ada26c97d71a54ede29287a1d163998530c73051c54cc\": rpc error: code = NotFound desc = could not find container \"f897ff110499e10b426ada26c97d71a54ede29287a1d163998530c73051c54cc\": container with ID starting with f897ff110499e10b426ada26c97d71a54ede29287a1d163998530c73051c54cc not found: ID does not exist"
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.930019 5119 scope.go:117] "RemoveContainer" containerID="5db6aa37f593c3cc95efad0c2d4844c706ccdebd4a245723f7db8072eb6551b3"
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.930246 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5db6aa37f593c3cc95efad0c2d4844c706ccdebd4a245723f7db8072eb6551b3"} err="failed to get container status \"5db6aa37f593c3cc95efad0c2d4844c706ccdebd4a245723f7db8072eb6551b3\": rpc error: code = NotFound desc = could not find container \"5db6aa37f593c3cc95efad0c2d4844c706ccdebd4a245723f7db8072eb6551b3\": container with ID starting with 5db6aa37f593c3cc95efad0c2d4844c706ccdebd4a245723f7db8072eb6551b3 not found: ID does not exist"
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.930268 5119 scope.go:117] "RemoveContainer" containerID="f897ff110499e10b426ada26c97d71a54ede29287a1d163998530c73051c54cc"
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.930458 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f897ff110499e10b426ada26c97d71a54ede29287a1d163998530c73051c54cc"} err="failed to get container status \"f897ff110499e10b426ada26c97d71a54ede29287a1d163998530c73051c54cc\": rpc error: code = NotFound desc = could not find container \"f897ff110499e10b426ada26c97d71a54ede29287a1d163998530c73051c54cc\": container with ID starting with f897ff110499e10b426ada26c97d71a54ede29287a1d163998530c73051c54cc not found: ID does not exist"
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.946260 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/74afbc02-7062-4cee-8845-fbb7e82bf96b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-k8nhj\" (UID: \"74afbc02-7062-4cee-8845-fbb7e82bf96b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-k8nhj"
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.946299 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kvmq\" (UniqueName: \"kubernetes.io/projected/74afbc02-7062-4cee-8845-fbb7e82bf96b-kube-api-access-7kvmq\") pod \"ovnkube-control-plane-97c9b6c48-k8nhj\" (UID: \"74afbc02-7062-4cee-8845-fbb7e82bf96b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-k8nhj"
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.946346 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/74afbc02-7062-4cee-8845-fbb7e82bf96b-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-k8nhj\" (UID: \"74afbc02-7062-4cee-8845-fbb7e82bf96b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-k8nhj"
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.946366 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/74afbc02-7062-4cee-8845-fbb7e82bf96b-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-k8nhj\" (UID: \"74afbc02-7062-4cee-8845-fbb7e82bf96b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-k8nhj"
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.946431 5119 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/766a5e24-f953-49f2-b732-1a783ea97e3f-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.946441 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-btx85\" (UniqueName: \"kubernetes.io/projected/766a5e24-f953-49f2-b732-1a783ea97e3f-kube-api-access-btx85\") on node \"crc\" DevicePath \"\""
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.946451 5119 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/766a5e24-f953-49f2-b732-1a783ea97e3f-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 21 10:04:40 crc kubenswrapper[5119]: I0121 10:04:40.946459 5119 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/766a5e24-f953-49f2-b732-1a783ea97e3f-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.047479 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/74afbc02-7062-4cee-8845-fbb7e82bf96b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-k8nhj\" (UID: \"74afbc02-7062-4cee-8845-fbb7e82bf96b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-k8nhj"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.047557 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7kvmq\" (UniqueName: \"kubernetes.io/projected/74afbc02-7062-4cee-8845-fbb7e82bf96b-kube-api-access-7kvmq\") pod \"ovnkube-control-plane-97c9b6c48-k8nhj\" (UID: \"74afbc02-7062-4cee-8845-fbb7e82bf96b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-k8nhj"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.047688 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/74afbc02-7062-4cee-8845-fbb7e82bf96b-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-k8nhj\" (UID: \"74afbc02-7062-4cee-8845-fbb7e82bf96b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-k8nhj"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.047732 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/74afbc02-7062-4cee-8845-fbb7e82bf96b-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-k8nhj\" (UID: \"74afbc02-7062-4cee-8845-fbb7e82bf96b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-k8nhj"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.048694 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/74afbc02-7062-4cee-8845-fbb7e82bf96b-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-k8nhj\" (UID: \"74afbc02-7062-4cee-8845-fbb7e82bf96b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-k8nhj"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.049718 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/74afbc02-7062-4cee-8845-fbb7e82bf96b-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-k8nhj\" (UID: \"74afbc02-7062-4cee-8845-fbb7e82bf96b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-k8nhj"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.051820 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/74afbc02-7062-4cee-8845-fbb7e82bf96b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-k8nhj\" (UID: \"74afbc02-7062-4cee-8845-fbb7e82bf96b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-k8nhj"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.068011 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kvmq\" (UniqueName: \"kubernetes.io/projected/74afbc02-7062-4cee-8845-fbb7e82bf96b-kube-api-access-7kvmq\") pod \"ovnkube-control-plane-97c9b6c48-k8nhj\" (UID: \"74afbc02-7062-4cee-8845-fbb7e82bf96b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-k8nhj"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.107002 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-k8nhj"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.111258 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lnxvl_8726e82a-1e7a-48e2-b1f0-4e34b17b37be/ovn-acl-logging/0.log"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.111732 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lnxvl_8726e82a-1e7a-48e2-b1f0-4e34b17b37be/ovn-controller/0.log"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.112216 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.174772 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cjvt8"]
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.175889 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="sbdb"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.175912 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="sbdb"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.175937 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="nbdb"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.175945 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="nbdb"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.175957 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="kube-rbac-proxy-node"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.175964 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="kube-rbac-proxy-node"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.175973 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="northd"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.175980 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="northd"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.175989 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="kubecfg-setup"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.175996 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="kubecfg-setup"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.176009 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="ovnkube-controller"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.176015 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="ovnkube-controller"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.176024 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="kube-rbac-proxy-ovn-metrics"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.176032 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="kube-rbac-proxy-ovn-metrics"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.176042 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="ovn-acl-logging"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.176049 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="ovn-acl-logging"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.176067 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="ovn-controller"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.176074 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="ovn-controller"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.176165 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="northd"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.176179 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="nbdb"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.176187 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="sbdb"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.176198 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="ovn-acl-logging"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.176209 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="ovn-controller"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.176217 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="kube-rbac-proxy-ovn-metrics"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.176224 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="kube-rbac-proxy-node"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.176274 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerName="ovnkube-controller"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.184938 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv"]
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.185191 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.185718 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-wkwlv"]
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.249322 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-node-log\") pod \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") "
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.249374 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-cni-bin\") pod \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") "
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.249388 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-run-systemd\") pod \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") "
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.249421 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-var-lib-openvswitch\") pod \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") "
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.249449 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhxd4\" (UniqueName: \"kubernetes.io/projected/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-kube-api-access-qhxd4\") pod \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") "
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.249467 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-systemd-units\") pod \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") "
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.249503 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-run-openvswitch\") pod \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") "
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.249537 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-ovn-node-metrics-cert\") pod \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") "
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.249553 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-log-socket\") pod \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") "
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.249580 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-ovnkube-config\") pod \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") "
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.249595 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-ovnkube-script-lib\") pod \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") "
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.249657 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-slash\") pod \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") "
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.249696 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-var-lib-cni-networks-ovn-kubernetes\") pod \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") "
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.249739 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-run-ovn\") pod \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") "
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.249755 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-kubelet\") pod \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") "
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.249782 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "8726e82a-1e7a-48e2-b1f0-4e34b17b37be" (UID: "8726e82a-1e7a-48e2-b1f0-4e34b17b37be"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.249810 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "8726e82a-1e7a-48e2-b1f0-4e34b17b37be" (UID: "8726e82a-1e7a-48e2-b1f0-4e34b17b37be"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.249860 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-node-log" (OuterVolumeSpecName: "node-log") pod "8726e82a-1e7a-48e2-b1f0-4e34b17b37be" (UID: "8726e82a-1e7a-48e2-b1f0-4e34b17b37be"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.249888 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "8726e82a-1e7a-48e2-b1f0-4e34b17b37be" (UID: "8726e82a-1e7a-48e2-b1f0-4e34b17b37be"). InnerVolumeSpecName "host-cni-bin".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.250339 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "8726e82a-1e7a-48e2-b1f0-4e34b17b37be" (UID: "8726e82a-1e7a-48e2-b1f0-4e34b17b37be"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.250390 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-slash" (OuterVolumeSpecName: "host-slash") pod "8726e82a-1e7a-48e2-b1f0-4e34b17b37be" (UID: "8726e82a-1e7a-48e2-b1f0-4e34b17b37be"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.250413 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "8726e82a-1e7a-48e2-b1f0-4e34b17b37be" (UID: "8726e82a-1e7a-48e2-b1f0-4e34b17b37be"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.250433 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "8726e82a-1e7a-48e2-b1f0-4e34b17b37be" (UID: "8726e82a-1e7a-48e2-b1f0-4e34b17b37be"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.250457 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "8726e82a-1e7a-48e2-b1f0-4e34b17b37be" (UID: "8726e82a-1e7a-48e2-b1f0-4e34b17b37be"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.250669 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "8726e82a-1e7a-48e2-b1f0-4e34b17b37be" (UID: "8726e82a-1e7a-48e2-b1f0-4e34b17b37be"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.250675 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "8726e82a-1e7a-48e2-b1f0-4e34b17b37be" (UID: "8726e82a-1e7a-48e2-b1f0-4e34b17b37be"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.250704 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-run-netns\") pod \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.250720 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-log-socket" (OuterVolumeSpecName: "log-socket") pod "8726e82a-1e7a-48e2-b1f0-4e34b17b37be" (UID: "8726e82a-1e7a-48e2-b1f0-4e34b17b37be"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.250735 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-env-overrides\") pod \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.250744 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "8726e82a-1e7a-48e2-b1f0-4e34b17b37be" (UID: "8726e82a-1e7a-48e2-b1f0-4e34b17b37be"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.250781 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-run-ovn-kubernetes\") pod \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.250811 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-etc-openvswitch\") pod \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.250831 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-cni-netd\") pod \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\" (UID: \"8726e82a-1e7a-48e2-b1f0-4e34b17b37be\") " Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.250861 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "8726e82a-1e7a-48e2-b1f0-4e34b17b37be" (UID: "8726e82a-1e7a-48e2-b1f0-4e34b17b37be"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.250997 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "8726e82a-1e7a-48e2-b1f0-4e34b17b37be" (UID: "8726e82a-1e7a-48e2-b1f0-4e34b17b37be"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.251023 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "8726e82a-1e7a-48e2-b1f0-4e34b17b37be" (UID: "8726e82a-1e7a-48e2-b1f0-4e34b17b37be"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.251043 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "8726e82a-1e7a-48e2-b1f0-4e34b17b37be" (UID: "8726e82a-1e7a-48e2-b1f0-4e34b17b37be"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.251272 5119 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.251307 5119 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.251319 5119 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.251334 5119 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.251348 5119 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.251359 5119 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.251370 5119 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.251390 5119 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-node-log\") on node \"crc\" DevicePath \"\"" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.251402 5119 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.251414 5119 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.251425 5119 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-systemd-units\") on node \"crc\" DevicePath \"\"" 
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.251435 5119 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.251442 5119 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-log-socket\") on node \"crc\" DevicePath \"\"" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.251450 5119 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.251457 5119 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.251465 5119 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-host-slash\") on node \"crc\" DevicePath \"\"" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.253660 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "8726e82a-1e7a-48e2-b1f0-4e34b17b37be" (UID: "8726e82a-1e7a-48e2-b1f0-4e34b17b37be"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.254327 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-kube-api-access-qhxd4" (OuterVolumeSpecName: "kube-api-access-qhxd4") pod "8726e82a-1e7a-48e2-b1f0-4e34b17b37be" (UID: "8726e82a-1e7a-48e2-b1f0-4e34b17b37be"). InnerVolumeSpecName "kube-api-access-qhxd4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.261547 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "8726e82a-1e7a-48e2-b1f0-4e34b17b37be" (UID: "8726e82a-1e7a-48e2-b1f0-4e34b17b37be"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.352750 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-systemd-units\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.352814 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-host-kubelet\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.352834 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-var-lib-openvswitch\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.352869 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/340b0b43-e24e-430d-8694-0a2e6ad12e0f-ovnkube-script-lib\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.352888 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/340b0b43-e24e-430d-8694-0a2e6ad12e0f-ovn-node-metrics-cert\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.352905 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-etc-openvswitch\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.352952 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-run-openvswitch\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.352994 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-nmw8z\" (UniqueName: \"kubernetes.io/projected/340b0b43-e24e-430d-8694-0a2e6ad12e0f-kube-api-access-nmw8z\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.353015 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-host-cni-netd\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.353033 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-host-cni-bin\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.353131 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/340b0b43-e24e-430d-8694-0a2e6ad12e0f-ovnkube-config\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.353217 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-host-run-ovn-kubernetes\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.353263 5119 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-node-log\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.353310 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.353381 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-run-ovn\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.353411 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-run-systemd\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.353447 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-log-socket\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.353516 
5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-host-slash\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.353597 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-host-run-netns\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.353642 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/340b0b43-e24e-430d-8694-0a2e6ad12e0f-env-overrides\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.353724 5119 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.353742 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qhxd4\" (UniqueName: \"kubernetes.io/projected/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-kube-api-access-qhxd4\") on node \"crc\" DevicePath \"\"" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.353757 5119 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:04:41 crc kubenswrapper[5119]: 
I0121 10:04:41.353770 5119 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8726e82a-1e7a-48e2-b1f0-4e34b17b37be-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.454970 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-node-log\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455016 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455047 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-run-ovn\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455065 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-run-systemd\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455101 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-run-ovn\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455099 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-node-log\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455128 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-log-socket\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455130 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455144 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-run-systemd\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455157 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-host-slash\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455178 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-host-slash\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455184 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-log-socket\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455185 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-host-run-netns\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455213 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-host-run-netns\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455231 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/340b0b43-e24e-430d-8694-0a2e6ad12e0f-env-overrides\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455255 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-systemd-units\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455329 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-host-kubelet\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455368 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-var-lib-openvswitch\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455398 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-systemd-units\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455408 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/340b0b43-e24e-430d-8694-0a2e6ad12e0f-ovnkube-script-lib\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455429 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-var-lib-openvswitch\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455432 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-host-kubelet\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455447 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/340b0b43-e24e-430d-8694-0a2e6ad12e0f-ovn-node-metrics-cert\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455475 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-etc-openvswitch\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455508 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-run-openvswitch\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455530 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nmw8z\" (UniqueName: \"kubernetes.io/projected/340b0b43-e24e-430d-8694-0a2e6ad12e0f-kube-api-access-nmw8z\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455554 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-run-openvswitch\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455558 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-host-cni-netd\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455583 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-host-cni-netd\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455586 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-host-cni-bin\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455631 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-host-cni-bin\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455738 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-etc-openvswitch\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455794 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/340b0b43-e24e-430d-8694-0a2e6ad12e0f-ovnkube-config\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455836 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-host-run-ovn-kubernetes\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.455920 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/340b0b43-e24e-430d-8694-0a2e6ad12e0f-host-run-ovn-kubernetes\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.456003 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/340b0b43-e24e-430d-8694-0a2e6ad12e0f-env-overrides\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.456170 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/340b0b43-e24e-430d-8694-0a2e6ad12e0f-ovnkube-script-lib\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.456360 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/340b0b43-e24e-430d-8694-0a2e6ad12e0f-ovnkube-config\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.460009 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/340b0b43-e24e-430d-8694-0a2e6ad12e0f-ovn-node-metrics-cert\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.473249 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmw8z\" (UniqueName: \"kubernetes.io/projected/340b0b43-e24e-430d-8694-0a2e6ad12e0f-kube-api-access-nmw8z\") pod \"ovnkube-node-cjvt8\" (UID: \"340b0b43-e24e-430d-8694-0a2e6ad12e0f\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.499254 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8"
Jan 21 10:04:41 crc kubenswrapper[5119]: W0121 10:04:41.523787 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod340b0b43_e24e_430d_8694_0a2e6ad12e0f.slice/crio-2e60d5448a289c44c8a7f6a5a5c233ff0e8022c9e94c67f18ec40d05bfda1e0f WatchSource:0}: Error finding container 2e60d5448a289c44c8a7f6a5a5c233ff0e8022c9e94c67f18ec40d05bfda1e0f: Status 404 returned error can't find the container with id 2e60d5448a289c44c8a7f6a5a5c233ff0e8022c9e94c67f18ec40d05bfda1e0f
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.840930 5119 generic.go:358] "Generic (PLEG): container finished" podID="340b0b43-e24e-430d-8694-0a2e6ad12e0f" containerID="7db704e6ba4b4e9cf6809739e0ee93ee64a36b82e3481b789f4c677d7e5c1f3b" exitCode=0
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.841035 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" event={"ID":"340b0b43-e24e-430d-8694-0a2e6ad12e0f","Type":"ContainerDied","Data":"7db704e6ba4b4e9cf6809739e0ee93ee64a36b82e3481b789f4c677d7e5c1f3b"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.841091 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" event={"ID":"340b0b43-e24e-430d-8694-0a2e6ad12e0f","Type":"ContainerStarted","Data":"2e60d5448a289c44c8a7f6a5a5c233ff0e8022c9e94c67f18ec40d05bfda1e0f"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.845011 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-k8nhj" event={"ID":"74afbc02-7062-4cee-8845-fbb7e82bf96b","Type":"ContainerStarted","Data":"996cd3d79f79e612a2fb77063b14ceed501c373d5aa2ff5ea350a62988cf377b"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.845063 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-k8nhj" event={"ID":"74afbc02-7062-4cee-8845-fbb7e82bf96b","Type":"ContainerStarted","Data":"5c76f1f3298fe3d138fb50d3bbf44b02582042e13037831a8abcf836cb106a5b"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.845075 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-k8nhj" event={"ID":"74afbc02-7062-4cee-8845-fbb7e82bf96b","Type":"ContainerStarted","Data":"a4eb8d2a451dcb69926376923c90c0095e11f1b41c474afa6e4606e5ee4efd97"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.852653 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lnxvl_8726e82a-1e7a-48e2-b1f0-4e34b17b37be/ovn-acl-logging/0.log"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.853208 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lnxvl_8726e82a-1e7a-48e2-b1f0-4e34b17b37be/ovn-controller/0.log"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.853574 5119 generic.go:358] "Generic (PLEG): container finished" podID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerID="3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546" exitCode=0
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.853612 5119 generic.go:358] "Generic (PLEG): container finished" podID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerID="9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8" exitCode=0
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.853621 5119 generic.go:358] "Generic (PLEG): container finished" podID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerID="0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc" exitCode=0
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.853628 5119 generic.go:358] "Generic (PLEG): container finished" podID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerID="60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345" exitCode=0
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.853637 5119 generic.go:358] "Generic (PLEG): container finished" podID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerID="f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622" exitCode=0
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.853643 5119 generic.go:358] "Generic (PLEG): container finished" podID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerID="1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e" exitCode=0
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.853649 5119 generic.go:358] "Generic (PLEG): container finished" podID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerID="6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8" exitCode=143
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.853655 5119 generic.go:358] "Generic (PLEG): container finished" podID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" containerID="1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459" exitCode=143
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.853618 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" event={"ID":"8726e82a-1e7a-48e2-b1f0-4e34b17b37be","Type":"ContainerDied","Data":"3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.853741 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" event={"ID":"8726e82a-1e7a-48e2-b1f0-4e34b17b37be","Type":"ContainerDied","Data":"9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.853755 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" event={"ID":"8726e82a-1e7a-48e2-b1f0-4e34b17b37be","Type":"ContainerDied","Data":"0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.853766 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" event={"ID":"8726e82a-1e7a-48e2-b1f0-4e34b17b37be","Type":"ContainerDied","Data":"60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.853776 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.853793 5119 scope.go:117] "RemoveContainer" containerID="3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.853777 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" event={"ID":"8726e82a-1e7a-48e2-b1f0-4e34b17b37be","Type":"ContainerDied","Data":"f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.853902 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" event={"ID":"8726e82a-1e7a-48e2-b1f0-4e34b17b37be","Type":"ContainerDied","Data":"1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.853922 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.853933 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.853939 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d825889c474d00efb63b28b2b9031854522bdd0776f82dcb7ce523509b34b311"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.853946 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" event={"ID":"8726e82a-1e7a-48e2-b1f0-4e34b17b37be","Type":"ContainerDied","Data":"6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.853954 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.853960 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.853965 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.853970 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.853975 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.853980 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.853985 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.853990 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.853995 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d825889c474d00efb63b28b2b9031854522bdd0776f82dcb7ce523509b34b311"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.854003 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" event={"ID":"8726e82a-1e7a-48e2-b1f0-4e34b17b37be","Type":"ContainerDied","Data":"1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.854011 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.854017 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.854022 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.854027 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.854032 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.854037 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.854042 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.854046 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.854051 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d825889c474d00efb63b28b2b9031854522bdd0776f82dcb7ce523509b34b311"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.854059 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lnxvl" event={"ID":"8726e82a-1e7a-48e2-b1f0-4e34b17b37be","Type":"ContainerDied","Data":"aceefea6105e80341af015245c60666025ce058c3ee6eddabca665eb964a9d2b"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.854068 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.854074 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.854079 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.854083 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.854088 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.854092 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.854097 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.854103 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.854107 5119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d825889c474d00efb63b28b2b9031854522bdd0776f82dcb7ce523509b34b311"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.855677 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.855722 5119 generic.go:358] "Generic (PLEG): container finished" podID="c3c35acb-afad-4124-a4e6-bf36f963ecbf" containerID="312f4cc68d22ceb0482ea69403845198dce304a803e2deb6620de418d8dc6b35" exitCode=2
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.855776 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7d4r9" event={"ID":"c3c35acb-afad-4124-a4e6-bf36f963ecbf","Type":"ContainerDied","Data":"312f4cc68d22ceb0482ea69403845198dce304a803e2deb6620de418d8dc6b35"}
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.856202 5119 scope.go:117] "RemoveContainer" containerID="312f4cc68d22ceb0482ea69403845198dce304a803e2deb6620de418d8dc6b35"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.876331 5119 scope.go:117] "RemoveContainer" containerID="9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.906056 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-k8nhj" podStartSLOduration=1.90603706 podStartE2EDuration="1.90603706s" podCreationTimestamp="2026-01-21 10:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:04:41.902343772 +0000 UTC m=+597.570435470" watchObservedRunningTime="2026-01-21 10:04:41.90603706 +0000 UTC m=+597.574128738"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.917022 5119 scope.go:117] "RemoveContainer" containerID="0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.922206 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lnxvl"]
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.927321 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lnxvl"]
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.953567 5119 scope.go:117] "RemoveContainer" containerID="60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.981574 5119 scope.go:117] "RemoveContainer" containerID="f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622"
Jan 21 10:04:41 crc kubenswrapper[5119]: I0121 10:04:41.998553 5119 scope.go:117] "RemoveContainer" containerID="1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e"
Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.012958 5119 scope.go:117] "RemoveContainer" containerID="6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8"
Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.030353 5119 scope.go:117] "RemoveContainer" containerID="1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459"
Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.046272 5119 scope.go:117] "RemoveContainer" containerID="d825889c474d00efb63b28b2b9031854522bdd0776f82dcb7ce523509b34b311"
Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.060543 5119 scope.go:117] "RemoveContainer" containerID="3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546"
Jan 21 10:04:42 crc kubenswrapper[5119]: E0121 10:04:42.061021 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546\": container with ID starting with 3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546 not found: ID does not exist" containerID="3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546"
Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.061076 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546"} err="failed to get container status \"3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546\": rpc error: code = NotFound desc = could not find container \"3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546\": container with ID starting with 3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546 not found: ID does not exist"
Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.061109 5119 scope.go:117] "RemoveContainer" containerID="9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8"
Jan 21 10:04:42 crc kubenswrapper[5119]: E0121 10:04:42.061686 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8\": container with ID starting with 9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8 not found: ID does not exist" containerID="9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8"
Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.061728 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8"} err="failed to get container status \"9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8\": rpc error: code = NotFound desc = could not find container \"9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8\": container with ID starting with 9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8 not found: ID does not exist"
Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.061777 5119 scope.go:117] "RemoveContainer" containerID="0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc"
Jan 21 10:04:42 crc kubenswrapper[5119]: E0121 10:04:42.062692 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc\": container with ID starting with 0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc not found: ID does not exist" containerID="0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc"
Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.062737 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc"} err="failed to get container status \"0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc\": rpc error: code = NotFound desc = could not find container \"0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc\": container with ID starting with 0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc not found: ID does not exist"
Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.062764 5119 scope.go:117] "RemoveContainer" containerID="60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345"
Jan 21 10:04:42 crc kubenswrapper[5119]: E0121 10:04:42.063587 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345\": container with ID starting with 60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345 not found: ID does not exist" containerID="60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345"
Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.063640 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345"} err="failed to get container status \"60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345\": rpc error: code = NotFound desc = could not find container \"60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345\": container with ID starting with 60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345 not found: ID does not exist"
Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.063666 5119 scope.go:117] "RemoveContainer" containerID="f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622"
Jan 21 10:04:42 crc kubenswrapper[5119]: E0121 10:04:42.064009 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622\": container with ID starting with f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622 not found: ID does not exist" containerID="f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622"
Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.064054 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622"} err="failed to get container status \"f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622\": rpc error: code = NotFound desc = could not find container \"f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622\": container with ID starting with f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622 not found: ID does not exist"
Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.064080 5119 scope.go:117] "RemoveContainer" containerID="1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e"
Jan 21 10:04:42 crc kubenswrapper[5119]: E0121 10:04:42.064374 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e\": container with ID starting with 1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e not found: ID does not exist" containerID="1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e"
Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.064398 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e"} err="failed to get container status \"1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e\": rpc error: code = NotFound desc = could not find container \"1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e\": container with ID starting with 1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e not found: ID does not exist"
Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.064411 5119 scope.go:117] "RemoveContainer" containerID="6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8"
Jan 21 10:04:42 crc kubenswrapper[5119]: E0121 10:04:42.064861 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8\": container with ID starting with 6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8 not found: ID does not exist" containerID="6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8"
Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.064882 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8"} err="failed to get container status \"6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8\": rpc error: code = NotFound desc = could not find container \"6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8\": container with ID 
starting with 6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8 not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.064898 5119 scope.go:117] "RemoveContainer" containerID="1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459" Jan 21 10:04:42 crc kubenswrapper[5119]: E0121 10:04:42.065578 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459\": container with ID starting with 1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459 not found: ID does not exist" containerID="1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.065666 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459"} err="failed to get container status \"1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459\": rpc error: code = NotFound desc = could not find container \"1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459\": container with ID starting with 1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459 not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.065698 5119 scope.go:117] "RemoveContainer" containerID="d825889c474d00efb63b28b2b9031854522bdd0776f82dcb7ce523509b34b311" Jan 21 10:04:42 crc kubenswrapper[5119]: E0121 10:04:42.066096 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d825889c474d00efb63b28b2b9031854522bdd0776f82dcb7ce523509b34b311\": container with ID starting with d825889c474d00efb63b28b2b9031854522bdd0776f82dcb7ce523509b34b311 not found: ID does not exist" containerID="d825889c474d00efb63b28b2b9031854522bdd0776f82dcb7ce523509b34b311" Jan 21 
10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.066162 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d825889c474d00efb63b28b2b9031854522bdd0776f82dcb7ce523509b34b311"} err="failed to get container status \"d825889c474d00efb63b28b2b9031854522bdd0776f82dcb7ce523509b34b311\": rpc error: code = NotFound desc = could not find container \"d825889c474d00efb63b28b2b9031854522bdd0776f82dcb7ce523509b34b311\": container with ID starting with d825889c474d00efb63b28b2b9031854522bdd0776f82dcb7ce523509b34b311 not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.066185 5119 scope.go:117] "RemoveContainer" containerID="3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.066659 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546"} err="failed to get container status \"3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546\": rpc error: code = NotFound desc = could not find container \"3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546\": container with ID starting with 3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546 not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.066680 5119 scope.go:117] "RemoveContainer" containerID="9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.067303 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8"} err="failed to get container status \"9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8\": rpc error: code = NotFound desc = could not find container 
\"9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8\": container with ID starting with 9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8 not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.067399 5119 scope.go:117] "RemoveContainer" containerID="0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.067722 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc"} err="failed to get container status \"0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc\": rpc error: code = NotFound desc = could not find container \"0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc\": container with ID starting with 0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.067747 5119 scope.go:117] "RemoveContainer" containerID="60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.067983 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345"} err="failed to get container status \"60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345\": rpc error: code = NotFound desc = could not find container \"60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345\": container with ID starting with 60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345 not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.067999 5119 scope.go:117] "RemoveContainer" containerID="f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.068188 5119 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622"} err="failed to get container status \"f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622\": rpc error: code = NotFound desc = could not find container \"f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622\": container with ID starting with f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622 not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.068203 5119 scope.go:117] "RemoveContainer" containerID="1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.068406 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e"} err="failed to get container status \"1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e\": rpc error: code = NotFound desc = could not find container \"1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e\": container with ID starting with 1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.068419 5119 scope.go:117] "RemoveContainer" containerID="6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.068613 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8"} err="failed to get container status \"6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8\": rpc error: code = NotFound desc = could not find container \"6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8\": container with ID starting with 
6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8 not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.068628 5119 scope.go:117] "RemoveContainer" containerID="1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.068877 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459"} err="failed to get container status \"1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459\": rpc error: code = NotFound desc = could not find container \"1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459\": container with ID starting with 1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459 not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.068892 5119 scope.go:117] "RemoveContainer" containerID="d825889c474d00efb63b28b2b9031854522bdd0776f82dcb7ce523509b34b311" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.069126 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d825889c474d00efb63b28b2b9031854522bdd0776f82dcb7ce523509b34b311"} err="failed to get container status \"d825889c474d00efb63b28b2b9031854522bdd0776f82dcb7ce523509b34b311\": rpc error: code = NotFound desc = could not find container \"d825889c474d00efb63b28b2b9031854522bdd0776f82dcb7ce523509b34b311\": container with ID starting with d825889c474d00efb63b28b2b9031854522bdd0776f82dcb7ce523509b34b311 not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.069140 5119 scope.go:117] "RemoveContainer" containerID="3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.069329 5119 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546"} err="failed to get container status \"3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546\": rpc error: code = NotFound desc = could not find container \"3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546\": container with ID starting with 3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546 not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.069342 5119 scope.go:117] "RemoveContainer" containerID="9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.069527 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8"} err="failed to get container status \"9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8\": rpc error: code = NotFound desc = could not find container \"9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8\": container with ID starting with 9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8 not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.069541 5119 scope.go:117] "RemoveContainer" containerID="0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.069861 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc"} err="failed to get container status \"0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc\": rpc error: code = NotFound desc = could not find container \"0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc\": container with ID starting with 0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc not found: ID does not 
exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.069886 5119 scope.go:117] "RemoveContainer" containerID="60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.070069 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345"} err="failed to get container status \"60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345\": rpc error: code = NotFound desc = could not find container \"60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345\": container with ID starting with 60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345 not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.070085 5119 scope.go:117] "RemoveContainer" containerID="f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.070408 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622"} err="failed to get container status \"f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622\": rpc error: code = NotFound desc = could not find container \"f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622\": container with ID starting with f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622 not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.070423 5119 scope.go:117] "RemoveContainer" containerID="1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.070662 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e"} err="failed to get container status 
\"1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e\": rpc error: code = NotFound desc = could not find container \"1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e\": container with ID starting with 1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.070680 5119 scope.go:117] "RemoveContainer" containerID="6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.070896 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8"} err="failed to get container status \"6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8\": rpc error: code = NotFound desc = could not find container \"6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8\": container with ID starting with 6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8 not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.070912 5119 scope.go:117] "RemoveContainer" containerID="1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.071243 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459"} err="failed to get container status \"1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459\": rpc error: code = NotFound desc = could not find container \"1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459\": container with ID starting with 1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459 not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.071254 5119 scope.go:117] "RemoveContainer" 
containerID="d825889c474d00efb63b28b2b9031854522bdd0776f82dcb7ce523509b34b311" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.071468 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d825889c474d00efb63b28b2b9031854522bdd0776f82dcb7ce523509b34b311"} err="failed to get container status \"d825889c474d00efb63b28b2b9031854522bdd0776f82dcb7ce523509b34b311\": rpc error: code = NotFound desc = could not find container \"d825889c474d00efb63b28b2b9031854522bdd0776f82dcb7ce523509b34b311\": container with ID starting with d825889c474d00efb63b28b2b9031854522bdd0776f82dcb7ce523509b34b311 not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.071483 5119 scope.go:117] "RemoveContainer" containerID="3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.071705 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546"} err="failed to get container status \"3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546\": rpc error: code = NotFound desc = could not find container \"3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546\": container with ID starting with 3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546 not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.071720 5119 scope.go:117] "RemoveContainer" containerID="9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.071920 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8"} err="failed to get container status \"9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8\": rpc error: code = NotFound desc = could 
not find container \"9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8\": container with ID starting with 9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8 not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.071935 5119 scope.go:117] "RemoveContainer" containerID="0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.072099 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc"} err="failed to get container status \"0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc\": rpc error: code = NotFound desc = could not find container \"0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc\": container with ID starting with 0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.072114 5119 scope.go:117] "RemoveContainer" containerID="60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.072355 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345"} err="failed to get container status \"60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345\": rpc error: code = NotFound desc = could not find container \"60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345\": container with ID starting with 60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345 not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.072370 5119 scope.go:117] "RemoveContainer" containerID="f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 
10:04:42.072549 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622"} err="failed to get container status \"f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622\": rpc error: code = NotFound desc = could not find container \"f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622\": container with ID starting with f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622 not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.072563 5119 scope.go:117] "RemoveContainer" containerID="1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.072796 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e"} err="failed to get container status \"1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e\": rpc error: code = NotFound desc = could not find container \"1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e\": container with ID starting with 1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.072819 5119 scope.go:117] "RemoveContainer" containerID="6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.073410 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8"} err="failed to get container status \"6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8\": rpc error: code = NotFound desc = could not find container \"6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8\": container with ID starting with 
6e17546a54e7548d6e536d6bd28c9a51d26b5acdf5339e71dd3f156ba4a8fdb8 not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.073524 5119 scope.go:117] "RemoveContainer" containerID="1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.073904 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459"} err="failed to get container status \"1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459\": rpc error: code = NotFound desc = could not find container \"1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459\": container with ID starting with 1b392b388d386060816de821e9ed6054f8a2a6ad4be4f732726c8c6265902459 not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.073933 5119 scope.go:117] "RemoveContainer" containerID="d825889c474d00efb63b28b2b9031854522bdd0776f82dcb7ce523509b34b311" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.074237 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d825889c474d00efb63b28b2b9031854522bdd0776f82dcb7ce523509b34b311"} err="failed to get container status \"d825889c474d00efb63b28b2b9031854522bdd0776f82dcb7ce523509b34b311\": rpc error: code = NotFound desc = could not find container \"d825889c474d00efb63b28b2b9031854522bdd0776f82dcb7ce523509b34b311\": container with ID starting with d825889c474d00efb63b28b2b9031854522bdd0776f82dcb7ce523509b34b311 not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.074269 5119 scope.go:117] "RemoveContainer" containerID="3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.074670 5119 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546"} err="failed to get container status \"3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546\": rpc error: code = NotFound desc = could not find container \"3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546\": container with ID starting with 3a236fc5cb1ea8da9eced546fd405b340e80715456fdad38ad5393feb5c89546 not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.074715 5119 scope.go:117] "RemoveContainer" containerID="9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.074964 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8"} err="failed to get container status \"9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8\": rpc error: code = NotFound desc = could not find container \"9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8\": container with ID starting with 9bef1135ad39af29ef421bea0738ff6a7c474c1b9871ae69fe6f28f7909b2bc8 not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.074989 5119 scope.go:117] "RemoveContainer" containerID="0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.075207 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc"} err="failed to get container status \"0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc\": rpc error: code = NotFound desc = could not find container \"0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc\": container with ID starting with 0fcfcec99826c130521de8207f992351308ad85aeb83b53bcbff6e85493444dc not found: ID does not 
exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.075231 5119 scope.go:117] "RemoveContainer" containerID="60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.075466 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345"} err="failed to get container status \"60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345\": rpc error: code = NotFound desc = could not find container \"60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345\": container with ID starting with 60c894515e44b6400bc7db2d1b3a22c03f7f7aa43716f6bafa3887dcc61d7345 not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.075491 5119 scope.go:117] "RemoveContainer" containerID="f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.075711 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622"} err="failed to get container status \"f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622\": rpc error: code = NotFound desc = could not find container \"f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622\": container with ID starting with f6e66a992d16976cbb32cca2c0bebc3145059e3cc3d55ee85a22ff5435544622 not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.075747 5119 scope.go:117] "RemoveContainer" containerID="1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.075958 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e"} err="failed to get container status 
\"1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e\": rpc error: code = NotFound desc = could not find container \"1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e\": container with ID starting with 1145357ee11367a8d2aa9f423da5dbd744c97f21338039cfac6e3ea7b5de1a6e not found: ID does not exist" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.603849 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="766a5e24-f953-49f2-b732-1a783ea97e3f" path="/var/lib/kubelet/pods/766a5e24-f953-49f2-b732-1a783ea97e3f/volumes" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.605203 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8726e82a-1e7a-48e2-b1f0-4e34b17b37be" path="/var/lib/kubelet/pods/8726e82a-1e7a-48e2-b1f0-4e34b17b37be/volumes" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.865576 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log" Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.865723 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7d4r9" event={"ID":"c3c35acb-afad-4124-a4e6-bf36f963ecbf","Type":"ContainerStarted","Data":"804f5be1c73bbc1f86b433cd5e2e07739c42ec7e3640e842b36b80fb767fe77d"} Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.871639 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" event={"ID":"340b0b43-e24e-430d-8694-0a2e6ad12e0f","Type":"ContainerStarted","Data":"f240ec48fd02d9313bd9809d66f4caa283995dfc136a4cfdcd76c8fa34932445"} Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.871690 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" event={"ID":"340b0b43-e24e-430d-8694-0a2e6ad12e0f","Type":"ContainerStarted","Data":"9af2e470758a87adf7b35eefcfe662b5080ff33b376fe0875ae98fa6a0c5995b"} Jan 21 
10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.871704 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" event={"ID":"340b0b43-e24e-430d-8694-0a2e6ad12e0f","Type":"ContainerStarted","Data":"b362a840577183e5a943ab0484f4a0349642699e03d0efad84964ddee107061d"} Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.871715 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" event={"ID":"340b0b43-e24e-430d-8694-0a2e6ad12e0f","Type":"ContainerStarted","Data":"a8cdbdc91a0d612a948cfc46fc943de9e2b62650b81bd89212598e32492746db"} Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.871726 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" event={"ID":"340b0b43-e24e-430d-8694-0a2e6ad12e0f","Type":"ContainerStarted","Data":"43f7994581b7b0b72e6aebeaa7eb972c51455709c327553dc092d19eb93d6d8f"} Jan 21 10:04:42 crc kubenswrapper[5119]: I0121 10:04:42.871736 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" event={"ID":"340b0b43-e24e-430d-8694-0a2e6ad12e0f","Type":"ContainerStarted","Data":"a73bd718c0c80aa13441a4fbd2320b0fd1328bf89d2e97efbcfe7b52a23fc058"} Jan 21 10:04:44 crc kubenswrapper[5119]: I0121 10:04:44.884578 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" event={"ID":"340b0b43-e24e-430d-8694-0a2e6ad12e0f","Type":"ContainerStarted","Data":"d2cd89b3b5fa86982573dfd755c66add753e8fd1c67e7b80575aa78e65384cbc"} Jan 21 10:04:45 crc kubenswrapper[5119]: I0121 10:04:45.076758 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log" Jan 21 10:04:45 crc kubenswrapper[5119]: I0121 10:04:45.080658 5119 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 21 10:04:45 crc kubenswrapper[5119]: I0121 10:04:45.154666 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log" Jan 21 10:04:45 crc kubenswrapper[5119]: I0121 10:04:45.159304 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 21 10:04:46 crc kubenswrapper[5119]: I0121 10:04:46.868889 5119 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 10:04:47 crc kubenswrapper[5119]: I0121 10:04:47.919192 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" event={"ID":"340b0b43-e24e-430d-8694-0a2e6ad12e0f","Type":"ContainerStarted","Data":"235f29ce2f2b826bee6a4157ce4f7c2388e2d7c9f511819675fc2b7e885955a9"} Jan 21 10:04:47 crc kubenswrapper[5119]: I0121 10:04:47.919531 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" Jan 21 10:04:47 crc kubenswrapper[5119]: I0121 10:04:47.952044 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" podStartSLOduration=6.952030925 podStartE2EDuration="6.952030925s" podCreationTimestamp="2026-01-21 10:04:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:04:47.948915851 +0000 UTC m=+603.617007549" watchObservedRunningTime="2026-01-21 10:04:47.952030925 +0000 UTC m=+603.620122603" Jan 21 10:04:47 crc kubenswrapper[5119]: I0121 10:04:47.962455 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" Jan 21 10:04:48 crc kubenswrapper[5119]: I0121 10:04:48.925163 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" Jan 21 10:04:48 crc kubenswrapper[5119]: I0121 10:04:48.925455 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" Jan 21 10:04:48 crc kubenswrapper[5119]: I0121 10:04:48.954738 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" Jan 21 10:05:20 crc kubenswrapper[5119]: I0121 10:05:20.952982 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cjvt8" Jan 21 10:05:49 crc kubenswrapper[5119]: I0121 10:05:49.920478 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:05:49 crc kubenswrapper[5119]: I0121 10:05:49.921078 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:05:58 crc kubenswrapper[5119]: I0121 10:05:58.589931 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-695sn"] Jan 21 10:05:58 crc kubenswrapper[5119]: I0121 10:05:58.591889 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-695sn" podUID="0f41c580-660a-421c-8be8-7ec588566fe5" containerName="registry-server" 
containerID="cri-o://47c7ced48960ecde742e3e93598fd6ab83e1e2d16dbe871880e484e40572ed73" gracePeriod=30 Jan 21 10:05:58 crc kubenswrapper[5119]: I0121 10:05:58.956874 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-695sn" Jan 21 10:05:59 crc kubenswrapper[5119]: I0121 10:05:59.006201 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f41c580-660a-421c-8be8-7ec588566fe5-catalog-content\") pod \"0f41c580-660a-421c-8be8-7ec588566fe5\" (UID: \"0f41c580-660a-421c-8be8-7ec588566fe5\") " Jan 21 10:05:59 crc kubenswrapper[5119]: I0121 10:05:59.006291 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f41c580-660a-421c-8be8-7ec588566fe5-utilities\") pod \"0f41c580-660a-421c-8be8-7ec588566fe5\" (UID: \"0f41c580-660a-421c-8be8-7ec588566fe5\") " Jan 21 10:05:59 crc kubenswrapper[5119]: I0121 10:05:59.006315 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tzc9\" (UniqueName: \"kubernetes.io/projected/0f41c580-660a-421c-8be8-7ec588566fe5-kube-api-access-7tzc9\") pod \"0f41c580-660a-421c-8be8-7ec588566fe5\" (UID: \"0f41c580-660a-421c-8be8-7ec588566fe5\") " Jan 21 10:05:59 crc kubenswrapper[5119]: I0121 10:05:59.008314 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f41c580-660a-421c-8be8-7ec588566fe5-utilities" (OuterVolumeSpecName: "utilities") pod "0f41c580-660a-421c-8be8-7ec588566fe5" (UID: "0f41c580-660a-421c-8be8-7ec588566fe5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:05:59 crc kubenswrapper[5119]: I0121 10:05:59.014129 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f41c580-660a-421c-8be8-7ec588566fe5-kube-api-access-7tzc9" (OuterVolumeSpecName: "kube-api-access-7tzc9") pod "0f41c580-660a-421c-8be8-7ec588566fe5" (UID: "0f41c580-660a-421c-8be8-7ec588566fe5"). InnerVolumeSpecName "kube-api-access-7tzc9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:05:59 crc kubenswrapper[5119]: I0121 10:05:59.021707 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f41c580-660a-421c-8be8-7ec588566fe5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f41c580-660a-421c-8be8-7ec588566fe5" (UID: "0f41c580-660a-421c-8be8-7ec588566fe5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:05:59 crc kubenswrapper[5119]: I0121 10:05:59.107474 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f41c580-660a-421c-8be8-7ec588566fe5-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:05:59 crc kubenswrapper[5119]: I0121 10:05:59.107530 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7tzc9\" (UniqueName: \"kubernetes.io/projected/0f41c580-660a-421c-8be8-7ec588566fe5-kube-api-access-7tzc9\") on node \"crc\" DevicePath \"\"" Jan 21 10:05:59 crc kubenswrapper[5119]: I0121 10:05:59.107555 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f41c580-660a-421c-8be8-7ec588566fe5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:05:59 crc kubenswrapper[5119]: I0121 10:05:59.331478 5119 generic.go:358] "Generic (PLEG): container finished" podID="0f41c580-660a-421c-8be8-7ec588566fe5" 
containerID="47c7ced48960ecde742e3e93598fd6ab83e1e2d16dbe871880e484e40572ed73" exitCode=0 Jan 21 10:05:59 crc kubenswrapper[5119]: I0121 10:05:59.331576 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-695sn" event={"ID":"0f41c580-660a-421c-8be8-7ec588566fe5","Type":"ContainerDied","Data":"47c7ced48960ecde742e3e93598fd6ab83e1e2d16dbe871880e484e40572ed73"} Jan 21 10:05:59 crc kubenswrapper[5119]: I0121 10:05:59.331622 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-695sn" event={"ID":"0f41c580-660a-421c-8be8-7ec588566fe5","Type":"ContainerDied","Data":"77ab43e9130ef01eb5fd4ee92bbe8322b3e4ca1de0ca72d8156a27127925c2b7"} Jan 21 10:05:59 crc kubenswrapper[5119]: I0121 10:05:59.331638 5119 scope.go:117] "RemoveContainer" containerID="47c7ced48960ecde742e3e93598fd6ab83e1e2d16dbe871880e484e40572ed73" Jan 21 10:05:59 crc kubenswrapper[5119]: I0121 10:05:59.331759 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-695sn" Jan 21 10:05:59 crc kubenswrapper[5119]: I0121 10:05:59.363697 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-695sn"] Jan 21 10:05:59 crc kubenswrapper[5119]: I0121 10:05:59.366193 5119 scope.go:117] "RemoveContainer" containerID="7f3eb5abd8e3d690dd45e39210d46c9c6bdac0544d329dd30a422d2c058f5038" Jan 21 10:05:59 crc kubenswrapper[5119]: I0121 10:05:59.368139 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-695sn"] Jan 21 10:05:59 crc kubenswrapper[5119]: I0121 10:05:59.392679 5119 scope.go:117] "RemoveContainer" containerID="8d0aa420b2cbe8e17af2cfd61d114d3fe08e2da45f26faed4b171414f67eeee7" Jan 21 10:05:59 crc kubenswrapper[5119]: I0121 10:05:59.411449 5119 scope.go:117] "RemoveContainer" containerID="47c7ced48960ecde742e3e93598fd6ab83e1e2d16dbe871880e484e40572ed73" Jan 21 10:05:59 crc kubenswrapper[5119]: E0121 10:05:59.411778 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47c7ced48960ecde742e3e93598fd6ab83e1e2d16dbe871880e484e40572ed73\": container with ID starting with 47c7ced48960ecde742e3e93598fd6ab83e1e2d16dbe871880e484e40572ed73 not found: ID does not exist" containerID="47c7ced48960ecde742e3e93598fd6ab83e1e2d16dbe871880e484e40572ed73" Jan 21 10:05:59 crc kubenswrapper[5119]: I0121 10:05:59.411811 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47c7ced48960ecde742e3e93598fd6ab83e1e2d16dbe871880e484e40572ed73"} err="failed to get container status \"47c7ced48960ecde742e3e93598fd6ab83e1e2d16dbe871880e484e40572ed73\": rpc error: code = NotFound desc = could not find container \"47c7ced48960ecde742e3e93598fd6ab83e1e2d16dbe871880e484e40572ed73\": container with ID starting with 47c7ced48960ecde742e3e93598fd6ab83e1e2d16dbe871880e484e40572ed73 not found: 
ID does not exist" Jan 21 10:05:59 crc kubenswrapper[5119]: I0121 10:05:59.411831 5119 scope.go:117] "RemoveContainer" containerID="7f3eb5abd8e3d690dd45e39210d46c9c6bdac0544d329dd30a422d2c058f5038" Jan 21 10:05:59 crc kubenswrapper[5119]: E0121 10:05:59.412164 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f3eb5abd8e3d690dd45e39210d46c9c6bdac0544d329dd30a422d2c058f5038\": container with ID starting with 7f3eb5abd8e3d690dd45e39210d46c9c6bdac0544d329dd30a422d2c058f5038 not found: ID does not exist" containerID="7f3eb5abd8e3d690dd45e39210d46c9c6bdac0544d329dd30a422d2c058f5038" Jan 21 10:05:59 crc kubenswrapper[5119]: I0121 10:05:59.412197 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f3eb5abd8e3d690dd45e39210d46c9c6bdac0544d329dd30a422d2c058f5038"} err="failed to get container status \"7f3eb5abd8e3d690dd45e39210d46c9c6bdac0544d329dd30a422d2c058f5038\": rpc error: code = NotFound desc = could not find container \"7f3eb5abd8e3d690dd45e39210d46c9c6bdac0544d329dd30a422d2c058f5038\": container with ID starting with 7f3eb5abd8e3d690dd45e39210d46c9c6bdac0544d329dd30a422d2c058f5038 not found: ID does not exist" Jan 21 10:05:59 crc kubenswrapper[5119]: I0121 10:05:59.412217 5119 scope.go:117] "RemoveContainer" containerID="8d0aa420b2cbe8e17af2cfd61d114d3fe08e2da45f26faed4b171414f67eeee7" Jan 21 10:05:59 crc kubenswrapper[5119]: E0121 10:05:59.412549 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d0aa420b2cbe8e17af2cfd61d114d3fe08e2da45f26faed4b171414f67eeee7\": container with ID starting with 8d0aa420b2cbe8e17af2cfd61d114d3fe08e2da45f26faed4b171414f67eeee7 not found: ID does not exist" containerID="8d0aa420b2cbe8e17af2cfd61d114d3fe08e2da45f26faed4b171414f67eeee7" Jan 21 10:05:59 crc kubenswrapper[5119]: I0121 10:05:59.412571 5119 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d0aa420b2cbe8e17af2cfd61d114d3fe08e2da45f26faed4b171414f67eeee7"} err="failed to get container status \"8d0aa420b2cbe8e17af2cfd61d114d3fe08e2da45f26faed4b171414f67eeee7\": rpc error: code = NotFound desc = could not find container \"8d0aa420b2cbe8e17af2cfd61d114d3fe08e2da45f26faed4b171414f67eeee7\": container with ID starting with 8d0aa420b2cbe8e17af2cfd61d114d3fe08e2da45f26faed4b171414f67eeee7 not found: ID does not exist" Jan 21 10:06:00 crc kubenswrapper[5119]: I0121 10:06:00.129650 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483166-rsv6r"] Jan 21 10:06:00 crc kubenswrapper[5119]: I0121 10:06:00.130469 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0f41c580-660a-421c-8be8-7ec588566fe5" containerName="extract-content" Jan 21 10:06:00 crc kubenswrapper[5119]: I0121 10:06:00.130483 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f41c580-660a-421c-8be8-7ec588566fe5" containerName="extract-content" Jan 21 10:06:00 crc kubenswrapper[5119]: I0121 10:06:00.130496 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0f41c580-660a-421c-8be8-7ec588566fe5" containerName="extract-utilities" Jan 21 10:06:00 crc kubenswrapper[5119]: I0121 10:06:00.130503 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f41c580-660a-421c-8be8-7ec588566fe5" containerName="extract-utilities" Jan 21 10:06:00 crc kubenswrapper[5119]: I0121 10:06:00.130522 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0f41c580-660a-421c-8be8-7ec588566fe5" containerName="registry-server" Jan 21 10:06:00 crc kubenswrapper[5119]: I0121 10:06:00.130531 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f41c580-660a-421c-8be8-7ec588566fe5" containerName="registry-server" Jan 21 10:06:00 crc kubenswrapper[5119]: I0121 10:06:00.130678 5119 memory_manager.go:356] 
"RemoveStaleState removing state" podUID="0f41c580-660a-421c-8be8-7ec588566fe5" containerName="registry-server" Jan 21 10:06:00 crc kubenswrapper[5119]: I0121 10:06:00.143655 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483166-rsv6r"] Jan 21 10:06:00 crc kubenswrapper[5119]: I0121 10:06:00.143750 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483166-rsv6r" Jan 21 10:06:00 crc kubenswrapper[5119]: I0121 10:06:00.146042 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:06:00 crc kubenswrapper[5119]: I0121 10:06:00.146390 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\"" Jan 21 10:06:00 crc kubenswrapper[5119]: I0121 10:06:00.147112 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:06:00 crc kubenswrapper[5119]: I0121 10:06:00.221075 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqsqc\" (UniqueName: \"kubernetes.io/projected/c216007e-7167-4880-af30-706cc2a590f8-kube-api-access-vqsqc\") pod \"auto-csr-approver-29483166-rsv6r\" (UID: \"c216007e-7167-4880-af30-706cc2a590f8\") " pod="openshift-infra/auto-csr-approver-29483166-rsv6r" Jan 21 10:06:00 crc kubenswrapper[5119]: I0121 10:06:00.322234 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vqsqc\" (UniqueName: \"kubernetes.io/projected/c216007e-7167-4880-af30-706cc2a590f8-kube-api-access-vqsqc\") pod \"auto-csr-approver-29483166-rsv6r\" (UID: \"c216007e-7167-4880-af30-706cc2a590f8\") " pod="openshift-infra/auto-csr-approver-29483166-rsv6r" Jan 21 10:06:00 crc kubenswrapper[5119]: I0121 10:06:00.341359 5119 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqsqc\" (UniqueName: \"kubernetes.io/projected/c216007e-7167-4880-af30-706cc2a590f8-kube-api-access-vqsqc\") pod \"auto-csr-approver-29483166-rsv6r\" (UID: \"c216007e-7167-4880-af30-706cc2a590f8\") " pod="openshift-infra/auto-csr-approver-29483166-rsv6r" Jan 21 10:06:00 crc kubenswrapper[5119]: I0121 10:06:00.459918 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483166-rsv6r" Jan 21 10:06:00 crc kubenswrapper[5119]: I0121 10:06:00.601657 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f41c580-660a-421c-8be8-7ec588566fe5" path="/var/lib/kubelet/pods/0f41c580-660a-421c-8be8-7ec588566fe5/volumes" Jan 21 10:06:00 crc kubenswrapper[5119]: I0121 10:06:00.681645 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483166-rsv6r"] Jan 21 10:06:00 crc kubenswrapper[5119]: W0121 10:06:00.688106 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc216007e_7167_4880_af30_706cc2a590f8.slice/crio-2a988db90b9faf576215b0a32317aa535ceefb32544f9ee2ae74dbd89793a8aa WatchSource:0}: Error finding container 2a988db90b9faf576215b0a32317aa535ceefb32544f9ee2ae74dbd89793a8aa: Status 404 returned error can't find the container with id 2a988db90b9faf576215b0a32317aa535ceefb32544f9ee2ae74dbd89793a8aa Jan 21 10:06:01 crc kubenswrapper[5119]: I0121 10:06:01.343899 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483166-rsv6r" event={"ID":"c216007e-7167-4880-af30-706cc2a590f8","Type":"ContainerStarted","Data":"2a988db90b9faf576215b0a32317aa535ceefb32544f9ee2ae74dbd89793a8aa"} Jan 21 10:06:02 crc kubenswrapper[5119]: I0121 10:06:02.165803 5119 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh"] Jan 21 10:06:02 crc kubenswrapper[5119]: I0121 10:06:02.174022 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh"] Jan 21 10:06:02 crc kubenswrapper[5119]: I0121 10:06:02.174236 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh" Jan 21 10:06:02 crc kubenswrapper[5119]: I0121 10:06:02.176996 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 21 10:06:02 crc kubenswrapper[5119]: I0121 10:06:02.244477 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ffb326e8-8174-4779-9192-7321b0edcb79-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh\" (UID: \"ffb326e8-8174-4779-9192-7321b0edcb79\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh" Jan 21 10:06:02 crc kubenswrapper[5119]: I0121 10:06:02.244629 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ffb326e8-8174-4779-9192-7321b0edcb79-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh\" (UID: \"ffb326e8-8174-4779-9192-7321b0edcb79\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh" Jan 21 10:06:02 crc kubenswrapper[5119]: I0121 10:06:02.244687 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2v4m\" (UniqueName: \"kubernetes.io/projected/ffb326e8-8174-4779-9192-7321b0edcb79-kube-api-access-h2v4m\") pod 
\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh\" (UID: \"ffb326e8-8174-4779-9192-7321b0edcb79\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh" Jan 21 10:06:02 crc kubenswrapper[5119]: I0121 10:06:02.345369 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h2v4m\" (UniqueName: \"kubernetes.io/projected/ffb326e8-8174-4779-9192-7321b0edcb79-kube-api-access-h2v4m\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh\" (UID: \"ffb326e8-8174-4779-9192-7321b0edcb79\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh" Jan 21 10:06:02 crc kubenswrapper[5119]: I0121 10:06:02.345453 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ffb326e8-8174-4779-9192-7321b0edcb79-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh\" (UID: \"ffb326e8-8174-4779-9192-7321b0edcb79\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh" Jan 21 10:06:02 crc kubenswrapper[5119]: I0121 10:06:02.345533 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ffb326e8-8174-4779-9192-7321b0edcb79-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh\" (UID: \"ffb326e8-8174-4779-9192-7321b0edcb79\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh" Jan 21 10:06:02 crc kubenswrapper[5119]: I0121 10:06:02.346090 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ffb326e8-8174-4779-9192-7321b0edcb79-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh\" (UID: \"ffb326e8-8174-4779-9192-7321b0edcb79\") " 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh" Jan 21 10:06:02 crc kubenswrapper[5119]: I0121 10:06:02.346417 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ffb326e8-8174-4779-9192-7321b0edcb79-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh\" (UID: \"ffb326e8-8174-4779-9192-7321b0edcb79\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh" Jan 21 10:06:02 crc kubenswrapper[5119]: I0121 10:06:02.369434 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2v4m\" (UniqueName: \"kubernetes.io/projected/ffb326e8-8174-4779-9192-7321b0edcb79-kube-api-access-h2v4m\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh\" (UID: \"ffb326e8-8174-4779-9192-7321b0edcb79\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh" Jan 21 10:06:02 crc kubenswrapper[5119]: I0121 10:06:02.499184 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh" Jan 21 10:06:02 crc kubenswrapper[5119]: I0121 10:06:02.700042 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh"] Jan 21 10:06:02 crc kubenswrapper[5119]: W0121 10:06:02.707131 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffb326e8_8174_4779_9192_7321b0edcb79.slice/crio-0af21d9061171e670d0f9e01c7cd4e9da53b8b226402a7a73246d8125cfdf36b WatchSource:0}: Error finding container 0af21d9061171e670d0f9e01c7cd4e9da53b8b226402a7a73246d8125cfdf36b: Status 404 returned error can't find the container with id 0af21d9061171e670d0f9e01c7cd4e9da53b8b226402a7a73246d8125cfdf36b Jan 21 10:06:03 crc kubenswrapper[5119]: I0121 10:06:03.358505 5119 generic.go:358] "Generic (PLEG): container finished" podID="ffb326e8-8174-4779-9192-7321b0edcb79" containerID="79a3f46e03f72073e11532f7186ce3c93baa19c6f9527c38674da8c53be847fe" exitCode=0 Jan 21 10:06:03 crc kubenswrapper[5119]: I0121 10:06:03.358972 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh" event={"ID":"ffb326e8-8174-4779-9192-7321b0edcb79","Type":"ContainerDied","Data":"79a3f46e03f72073e11532f7186ce3c93baa19c6f9527c38674da8c53be847fe"} Jan 21 10:06:03 crc kubenswrapper[5119]: I0121 10:06:03.358997 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh" event={"ID":"ffb326e8-8174-4779-9192-7321b0edcb79","Type":"ContainerStarted","Data":"0af21d9061171e670d0f9e01c7cd4e9da53b8b226402a7a73246d8125cfdf36b"} Jan 21 10:06:04 crc kubenswrapper[5119]: I0121 10:06:04.370658 5119 generic.go:358] "Generic (PLEG): container finished" 
podID="c216007e-7167-4880-af30-706cc2a590f8" containerID="e6fda05d2c086a71d2b104f8fea16ab011a1923fc7c32ea88db544c8cb21a193" exitCode=0 Jan 21 10:06:04 crc kubenswrapper[5119]: I0121 10:06:04.370751 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483166-rsv6r" event={"ID":"c216007e-7167-4880-af30-706cc2a590f8","Type":"ContainerDied","Data":"e6fda05d2c086a71d2b104f8fea16ab011a1923fc7c32ea88db544c8cb21a193"} Jan 21 10:06:05 crc kubenswrapper[5119]: I0121 10:06:05.378071 5119 generic.go:358] "Generic (PLEG): container finished" podID="ffb326e8-8174-4779-9192-7321b0edcb79" containerID="296d2b83a261a153f639482e6432b47e0688782a052ba21569ee7a3287db169e" exitCode=0 Jan 21 10:06:05 crc kubenswrapper[5119]: I0121 10:06:05.378201 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh" event={"ID":"ffb326e8-8174-4779-9192-7321b0edcb79","Type":"ContainerDied","Data":"296d2b83a261a153f639482e6432b47e0688782a052ba21569ee7a3287db169e"} Jan 21 10:06:05 crc kubenswrapper[5119]: I0121 10:06:05.600559 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483166-rsv6r" Jan 21 10:06:05 crc kubenswrapper[5119]: I0121 10:06:05.788863 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqsqc\" (UniqueName: \"kubernetes.io/projected/c216007e-7167-4880-af30-706cc2a590f8-kube-api-access-vqsqc\") pod \"c216007e-7167-4880-af30-706cc2a590f8\" (UID: \"c216007e-7167-4880-af30-706cc2a590f8\") " Jan 21 10:06:05 crc kubenswrapper[5119]: I0121 10:06:05.795591 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c216007e-7167-4880-af30-706cc2a590f8-kube-api-access-vqsqc" (OuterVolumeSpecName: "kube-api-access-vqsqc") pod "c216007e-7167-4880-af30-706cc2a590f8" (UID: "c216007e-7167-4880-af30-706cc2a590f8"). InnerVolumeSpecName "kube-api-access-vqsqc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:06:05 crc kubenswrapper[5119]: I0121 10:06:05.890397 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vqsqc\" (UniqueName: \"kubernetes.io/projected/c216007e-7167-4880-af30-706cc2a590f8-kube-api-access-vqsqc\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:06 crc kubenswrapper[5119]: I0121 10:06:06.388001 5119 generic.go:358] "Generic (PLEG): container finished" podID="ffb326e8-8174-4779-9192-7321b0edcb79" containerID="bc769fcde25ca74efb599672f3710a6bb34eaed9c94c3cf7c8bb375bf32f1822" exitCode=0 Jan 21 10:06:06 crc kubenswrapper[5119]: I0121 10:06:06.388147 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh" event={"ID":"ffb326e8-8174-4779-9192-7321b0edcb79","Type":"ContainerDied","Data":"bc769fcde25ca74efb599672f3710a6bb34eaed9c94c3cf7c8bb375bf32f1822"} Jan 21 10:06:06 crc kubenswrapper[5119]: I0121 10:06:06.389259 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483166-rsv6r"
Jan 21 10:06:06 crc kubenswrapper[5119]: I0121 10:06:06.389271 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483166-rsv6r" event={"ID":"c216007e-7167-4880-af30-706cc2a590f8","Type":"ContainerDied","Data":"2a988db90b9faf576215b0a32317aa535ceefb32544f9ee2ae74dbd89793a8aa"}
Jan 21 10:06:06 crc kubenswrapper[5119]: I0121 10:06:06.389292 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a988db90b9faf576215b0a32317aa535ceefb32544f9ee2ae74dbd89793a8aa"
Jan 21 10:06:06 crc kubenswrapper[5119]: I0121 10:06:06.656666 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483160-fn4lx"]
Jan 21 10:06:06 crc kubenswrapper[5119]: I0121 10:06:06.659820 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483160-fn4lx"]
Jan 21 10:06:07 crc kubenswrapper[5119]: I0121 10:06:07.626904 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh"
Jan 21 10:06:07 crc kubenswrapper[5119]: I0121 10:06:07.814166 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ffb326e8-8174-4779-9192-7321b0edcb79-bundle\") pod \"ffb326e8-8174-4779-9192-7321b0edcb79\" (UID: \"ffb326e8-8174-4779-9192-7321b0edcb79\") "
Jan 21 10:06:07 crc kubenswrapper[5119]: I0121 10:06:07.814495 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ffb326e8-8174-4779-9192-7321b0edcb79-util\") pod \"ffb326e8-8174-4779-9192-7321b0edcb79\" (UID: \"ffb326e8-8174-4779-9192-7321b0edcb79\") "
Jan 21 10:06:07 crc kubenswrapper[5119]: I0121 10:06:07.814538 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2v4m\" (UniqueName: \"kubernetes.io/projected/ffb326e8-8174-4779-9192-7321b0edcb79-kube-api-access-h2v4m\") pod \"ffb326e8-8174-4779-9192-7321b0edcb79\" (UID: \"ffb326e8-8174-4779-9192-7321b0edcb79\") "
Jan 21 10:06:07 crc kubenswrapper[5119]: I0121 10:06:07.816958 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffb326e8-8174-4779-9192-7321b0edcb79-bundle" (OuterVolumeSpecName: "bundle") pod "ffb326e8-8174-4779-9192-7321b0edcb79" (UID: "ffb326e8-8174-4779-9192-7321b0edcb79"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:06:07 crc kubenswrapper[5119]: I0121 10:06:07.819733 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffb326e8-8174-4779-9192-7321b0edcb79-kube-api-access-h2v4m" (OuterVolumeSpecName: "kube-api-access-h2v4m") pod "ffb326e8-8174-4779-9192-7321b0edcb79" (UID: "ffb326e8-8174-4779-9192-7321b0edcb79"). InnerVolumeSpecName "kube-api-access-h2v4m". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:06:07 crc kubenswrapper[5119]: I0121 10:06:07.826558 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffb326e8-8174-4779-9192-7321b0edcb79-util" (OuterVolumeSpecName: "util") pod "ffb326e8-8174-4779-9192-7321b0edcb79" (UID: "ffb326e8-8174-4779-9192-7321b0edcb79"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:06:07 crc kubenswrapper[5119]: I0121 10:06:07.915967 5119 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ffb326e8-8174-4779-9192-7321b0edcb79-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 10:06:07 crc kubenswrapper[5119]: I0121 10:06:07.916003 5119 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ffb326e8-8174-4779-9192-7321b0edcb79-util\") on node \"crc\" DevicePath \"\""
Jan 21 10:06:07 crc kubenswrapper[5119]: I0121 10:06:07.916014 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h2v4m\" (UniqueName: \"kubernetes.io/projected/ffb326e8-8174-4779-9192-7321b0edcb79-kube-api-access-h2v4m\") on node \"crc\" DevicePath \"\""
Jan 21 10:06:08 crc kubenswrapper[5119]: I0121 10:06:08.404413 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh"
Jan 21 10:06:08 crc kubenswrapper[5119]: I0121 10:06:08.404411 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh" event={"ID":"ffb326e8-8174-4779-9192-7321b0edcb79","Type":"ContainerDied","Data":"0af21d9061171e670d0f9e01c7cd4e9da53b8b226402a7a73246d8125cfdf36b"}
Jan 21 10:06:08 crc kubenswrapper[5119]: I0121 10:06:08.404569 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af21d9061171e670d0f9e01c7cd4e9da53b8b226402a7a73246d8125cfdf36b"
Jan 21 10:06:08 crc kubenswrapper[5119]: I0121 10:06:08.602643 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8718e06-ef68-4354-92ff-67ea0a52da09" path="/var/lib/kubelet/pods/e8718e06-ef68-4354-92ff-67ea0a52da09/volumes"
Jan 21 10:06:08 crc kubenswrapper[5119]: I0121 10:06:08.766869 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb"]
Jan 21 10:06:08 crc kubenswrapper[5119]: I0121 10:06:08.773408 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c216007e-7167-4880-af30-706cc2a590f8" containerName="oc"
Jan 21 10:06:08 crc kubenswrapper[5119]: I0121 10:06:08.773449 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="c216007e-7167-4880-af30-706cc2a590f8" containerName="oc"
Jan 21 10:06:08 crc kubenswrapper[5119]: I0121 10:06:08.773482 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ffb326e8-8174-4779-9192-7321b0edcb79" containerName="extract"
Jan 21 10:06:08 crc kubenswrapper[5119]: I0121 10:06:08.773491 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb326e8-8174-4779-9192-7321b0edcb79" containerName="extract"
Jan 21 10:06:08 crc kubenswrapper[5119]: I0121 10:06:08.773506 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ffb326e8-8174-4779-9192-7321b0edcb79" containerName="pull"
Jan 21 10:06:08 crc kubenswrapper[5119]: I0121 10:06:08.773513 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb326e8-8174-4779-9192-7321b0edcb79" containerName="pull"
Jan 21 10:06:08 crc kubenswrapper[5119]: I0121 10:06:08.773529 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ffb326e8-8174-4779-9192-7321b0edcb79" containerName="util"
Jan 21 10:06:08 crc kubenswrapper[5119]: I0121 10:06:08.773549 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb326e8-8174-4779-9192-7321b0edcb79" containerName="util"
Jan 21 10:06:08 crc kubenswrapper[5119]: I0121 10:06:08.773690 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="c216007e-7167-4880-af30-706cc2a590f8" containerName="oc"
Jan 21 10:06:08 crc kubenswrapper[5119]: I0121 10:06:08.773705 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="ffb326e8-8174-4779-9192-7321b0edcb79" containerName="extract"
Jan 21 10:06:08 crc kubenswrapper[5119]: I0121 10:06:08.788300 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb"]
Jan 21 10:06:08 crc kubenswrapper[5119]: I0121 10:06:08.788455 5119 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb"
Jan 21 10:06:08 crc kubenswrapper[5119]: I0121 10:06:08.792809 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\""
Jan 21 10:06:08 crc kubenswrapper[5119]: I0121 10:06:08.828950 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdk4k\" (UniqueName: \"kubernetes.io/projected/8e328aac-fdc5-4809-9e67-d4e3cbe46404-kube-api-access-hdk4k\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb\" (UID: \"8e328aac-fdc5-4809-9e67-d4e3cbe46404\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb"
Jan 21 10:06:08 crc kubenswrapper[5119]: I0121 10:06:08.829005 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8e328aac-fdc5-4809-9e67-d4e3cbe46404-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb\" (UID: \"8e328aac-fdc5-4809-9e67-d4e3cbe46404\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb"
Jan 21 10:06:08 crc kubenswrapper[5119]: I0121 10:06:08.829036 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8e328aac-fdc5-4809-9e67-d4e3cbe46404-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb\" (UID: \"8e328aac-fdc5-4809-9e67-d4e3cbe46404\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb"
Jan 21 10:06:08 crc kubenswrapper[5119]: I0121 10:06:08.930749 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8e328aac-fdc5-4809-9e67-d4e3cbe46404-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb\" (UID: \"8e328aac-fdc5-4809-9e67-d4e3cbe46404\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb"
Jan 21 10:06:08 crc kubenswrapper[5119]: I0121 10:06:08.930816 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8e328aac-fdc5-4809-9e67-d4e3cbe46404-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb\" (UID: \"8e328aac-fdc5-4809-9e67-d4e3cbe46404\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb"
Jan 21 10:06:08 crc kubenswrapper[5119]: I0121 10:06:08.930993 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hdk4k\" (UniqueName: \"kubernetes.io/projected/8e328aac-fdc5-4809-9e67-d4e3cbe46404-kube-api-access-hdk4k\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb\" (UID: \"8e328aac-fdc5-4809-9e67-d4e3cbe46404\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb"
Jan 21 10:06:08 crc kubenswrapper[5119]: I0121 10:06:08.931297 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8e328aac-fdc5-4809-9e67-d4e3cbe46404-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb\" (UID: \"8e328aac-fdc5-4809-9e67-d4e3cbe46404\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb"
Jan 21 10:06:08 crc kubenswrapper[5119]: I0121 10:06:08.931688 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8e328aac-fdc5-4809-9e67-d4e3cbe46404-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb\" (UID: \"8e328aac-fdc5-4809-9e67-d4e3cbe46404\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb"
Jan 21 10:06:08 crc kubenswrapper[5119]: I0121 10:06:08.951084 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdk4k\" (UniqueName: \"kubernetes.io/projected/8e328aac-fdc5-4809-9e67-d4e3cbe46404-kube-api-access-hdk4k\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb\" (UID: \"8e328aac-fdc5-4809-9e67-d4e3cbe46404\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb"
Jan 21 10:06:09 crc kubenswrapper[5119]: I0121 10:06:09.110452 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb"
Jan 21 10:06:09 crc kubenswrapper[5119]: I0121 10:06:09.322743 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb"]
Jan 21 10:06:09 crc kubenswrapper[5119]: W0121 10:06:09.325328 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e328aac_fdc5_4809_9e67_d4e3cbe46404.slice/crio-02cbf017bf1765db7505fe2de986bf22992cf52e7395db77a057676050839d38 WatchSource:0}: Error finding container 02cbf017bf1765db7505fe2de986bf22992cf52e7395db77a057676050839d38: Status 404 returned error can't find the container with id 02cbf017bf1765db7505fe2de986bf22992cf52e7395db77a057676050839d38
Jan 21 10:06:09 crc kubenswrapper[5119]: I0121 10:06:09.415264 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb" event={"ID":"8e328aac-fdc5-4809-9e67-d4e3cbe46404","Type":"ContainerStarted","Data":"02cbf017bf1765db7505fe2de986bf22992cf52e7395db77a057676050839d38"}
Jan 21 10:06:09 crc kubenswrapper[5119]: I0121 10:06:09.571041 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx"]
Jan 21 10:06:09 crc kubenswrapper[5119]: I0121 10:06:09.578228 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx"
Jan 21 10:06:09 crc kubenswrapper[5119]: I0121 10:06:09.584495 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx"]
Jan 21 10:06:09 crc kubenswrapper[5119]: I0121 10:06:09.639597 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx\" (UID: \"73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx"
Jan 21 10:06:09 crc kubenswrapper[5119]: I0121 10:06:09.639918 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxgnv\" (UniqueName: \"kubernetes.io/projected/73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4-kube-api-access-bxgnv\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx\" (UID: \"73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx"
Jan 21 10:06:09 crc kubenswrapper[5119]: I0121 10:06:09.639967 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx\" (UID: \"73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx"
Jan 21 10:06:09 crc kubenswrapper[5119]: I0121 10:06:09.741762 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx\" (UID: \"73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx"
Jan 21 10:06:09 crc kubenswrapper[5119]: I0121 10:06:09.741890 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bxgnv\" (UniqueName: \"kubernetes.io/projected/73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4-kube-api-access-bxgnv\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx\" (UID: \"73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx"
Jan 21 10:06:09 crc kubenswrapper[5119]: I0121 10:06:09.742048 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx\" (UID: \"73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx"
Jan 21 10:06:09 crc kubenswrapper[5119]: I0121 10:06:09.742835 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx\" (UID: \"73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx"
Jan 21 10:06:09 crc kubenswrapper[5119]: I0121 10:06:09.745670 5119
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx\" (UID: \"73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx"
Jan 21 10:06:09 crc kubenswrapper[5119]: I0121 10:06:09.763806 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxgnv\" (UniqueName: \"kubernetes.io/projected/73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4-kube-api-access-bxgnv\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx\" (UID: \"73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx"
Jan 21 10:06:09 crc kubenswrapper[5119]: I0121 10:06:09.936545 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx"
Jan 21 10:06:10 crc kubenswrapper[5119]: I0121 10:06:10.394766 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx"]
Jan 21 10:06:10 crc kubenswrapper[5119]: I0121 10:06:10.449283 5119 generic.go:358] "Generic (PLEG): container finished" podID="8e328aac-fdc5-4809-9e67-d4e3cbe46404" containerID="0ea20b3d570140972e7b2c35f51e6a15bd6a1e72847eee10a7f17e870b872759" exitCode=0
Jan 21 10:06:10 crc kubenswrapper[5119]: I0121 10:06:10.449669 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb" event={"ID":"8e328aac-fdc5-4809-9e67-d4e3cbe46404","Type":"ContainerDied","Data":"0ea20b3d570140972e7b2c35f51e6a15bd6a1e72847eee10a7f17e870b872759"}
Jan 21 10:06:11 crc kubenswrapper[5119]: I0121 10:06:11.456720 5119 generic.go:358] "Generic (PLEG): container finished" podID="73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4" containerID="a76ad7d7e2ac69ffe5f8f386224f4b6c5eb72dacac8a626acb2a268a0d8cd3e2" exitCode=0
Jan 21 10:06:11 crc kubenswrapper[5119]: I0121 10:06:11.456840 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx" event={"ID":"73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4","Type":"ContainerDied","Data":"a76ad7d7e2ac69ffe5f8f386224f4b6c5eb72dacac8a626acb2a268a0d8cd3e2"}
Jan 21 10:06:11 crc kubenswrapper[5119]: I0121 10:06:11.457506 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx" event={"ID":"73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4","Type":"ContainerStarted","Data":"bef8df08c1094cba8742985e48a2d2d76815d0b74ebf4601616fb9e0a58ee2a6"}
Jan 21 10:06:12 crc kubenswrapper[5119]: I0121 10:06:12.465990 5119 generic.go:358] "Generic (PLEG): container finished" podID="8e328aac-fdc5-4809-9e67-d4e3cbe46404" containerID="63c308406bd50dc0848ff97e43e72432b62acb30f83567152cfae148eb93af1f" exitCode=0
Jan 21 10:06:12 crc kubenswrapper[5119]: I0121 10:06:12.466109 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb" event={"ID":"8e328aac-fdc5-4809-9e67-d4e3cbe46404","Type":"ContainerDied","Data":"63c308406bd50dc0848ff97e43e72432b62acb30f83567152cfae148eb93af1f"}
Jan 21 10:06:13 crc kubenswrapper[5119]: I0121 10:06:13.473654 5119 generic.go:358] "Generic (PLEG): container finished" podID="73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4" containerID="9b2dbbe8a501168f7ef9888a60dfdc248507613b3ec6dfc4ca0c64db87be6df1" exitCode=0
Jan 21 10:06:13 crc kubenswrapper[5119]: I0121 10:06:13.473838 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx" event={"ID":"73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4","Type":"ContainerDied","Data":"9b2dbbe8a501168f7ef9888a60dfdc248507613b3ec6dfc4ca0c64db87be6df1"}
Jan 21 10:06:13 crc kubenswrapper[5119]: I0121 10:06:13.477078 5119 generic.go:358] "Generic (PLEG): container finished" podID="8e328aac-fdc5-4809-9e67-d4e3cbe46404" containerID="b2797cd9ec4b94c42dff0a6b36e89e9fce90e815600fed5843120b4e363a42c5" exitCode=0
Jan 21 10:06:13 crc kubenswrapper[5119]: I0121 10:06:13.477279 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb" event={"ID":"8e328aac-fdc5-4809-9e67-d4e3cbe46404","Type":"ContainerDied","Data":"b2797cd9ec4b94c42dff0a6b36e89e9fce90e815600fed5843120b4e363a42c5"}
Jan 21 10:06:14 crc kubenswrapper[5119]: I0121 10:06:14.483825 5119 generic.go:358] "Generic (PLEG): container finished" podID="73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4" containerID="dc7ea0703698880e7db0706ae860c89647eeb51027f3238a63e9eba13d44087a" exitCode=0
Jan 21 10:06:14 crc kubenswrapper[5119]: I0121 10:06:14.484030 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx" event={"ID":"73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4","Type":"ContainerDied","Data":"dc7ea0703698880e7db0706ae860c89647eeb51027f3238a63e9eba13d44087a"}
Jan 21 10:06:14 crc kubenswrapper[5119]: I0121 10:06:14.936493 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb"
Jan 21 10:06:15 crc kubenswrapper[5119]: I0121 10:06:15.020705 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdk4k\" (UniqueName: \"kubernetes.io/projected/8e328aac-fdc5-4809-9e67-d4e3cbe46404-kube-api-access-hdk4k\") pod \"8e328aac-fdc5-4809-9e67-d4e3cbe46404\" (UID: \"8e328aac-fdc5-4809-9e67-d4e3cbe46404\") "
Jan 21 10:06:15 crc kubenswrapper[5119]: I0121 10:06:15.021065 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8e328aac-fdc5-4809-9e67-d4e3cbe46404-bundle\") pod \"8e328aac-fdc5-4809-9e67-d4e3cbe46404\" (UID: \"8e328aac-fdc5-4809-9e67-d4e3cbe46404\") "
Jan 21 10:06:15 crc kubenswrapper[5119]: I0121 10:06:15.021161 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8e328aac-fdc5-4809-9e67-d4e3cbe46404-util\") pod \"8e328aac-fdc5-4809-9e67-d4e3cbe46404\" (UID: \"8e328aac-fdc5-4809-9e67-d4e3cbe46404\") "
Jan 21 10:06:15 crc kubenswrapper[5119]: I0121 10:06:15.024165 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e328aac-fdc5-4809-9e67-d4e3cbe46404-bundle" (OuterVolumeSpecName: "bundle") pod "8e328aac-fdc5-4809-9e67-d4e3cbe46404" (UID: "8e328aac-fdc5-4809-9e67-d4e3cbe46404"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:06:15 crc kubenswrapper[5119]: I0121 10:06:15.032919 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e328aac-fdc5-4809-9e67-d4e3cbe46404-kube-api-access-hdk4k" (OuterVolumeSpecName: "kube-api-access-hdk4k") pod "8e328aac-fdc5-4809-9e67-d4e3cbe46404" (UID: "8e328aac-fdc5-4809-9e67-d4e3cbe46404"). InnerVolumeSpecName "kube-api-access-hdk4k". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:06:15 crc kubenswrapper[5119]: I0121 10:06:15.037030 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e328aac-fdc5-4809-9e67-d4e3cbe46404-util" (OuterVolumeSpecName: "util") pod "8e328aac-fdc5-4809-9e67-d4e3cbe46404" (UID: "8e328aac-fdc5-4809-9e67-d4e3cbe46404"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:06:15 crc kubenswrapper[5119]: I0121 10:06:15.122326 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hdk4k\" (UniqueName: \"kubernetes.io/projected/8e328aac-fdc5-4809-9e67-d4e3cbe46404-kube-api-access-hdk4k\") on node \"crc\" DevicePath \"\""
Jan 21 10:06:15 crc kubenswrapper[5119]: I0121 10:06:15.122359 5119 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8e328aac-fdc5-4809-9e67-d4e3cbe46404-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 10:06:15 crc kubenswrapper[5119]: I0121 10:06:15.122367 5119 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8e328aac-fdc5-4809-9e67-d4e3cbe46404-util\") on node \"crc\" DevicePath \"\""
Jan 21 10:06:15 crc kubenswrapper[5119]: I0121 10:06:15.491051 5119 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb"
Jan 21 10:06:15 crc kubenswrapper[5119]: I0121 10:06:15.491048 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb" event={"ID":"8e328aac-fdc5-4809-9e67-d4e3cbe46404","Type":"ContainerDied","Data":"02cbf017bf1765db7505fe2de986bf22992cf52e7395db77a057676050839d38"}
Jan 21 10:06:15 crc kubenswrapper[5119]: I0121 10:06:15.491207 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02cbf017bf1765db7505fe2de986bf22992cf52e7395db77a057676050839d38"
Jan 21 10:06:15 crc kubenswrapper[5119]: I0121 10:06:15.754532 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx"
Jan 21 10:06:15 crc kubenswrapper[5119]: I0121 10:06:15.829665 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxgnv\" (UniqueName: \"kubernetes.io/projected/73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4-kube-api-access-bxgnv\") pod \"73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4\" (UID: \"73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4\") "
Jan 21 10:06:15 crc kubenswrapper[5119]: I0121 10:06:15.829786 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4-util\") pod \"73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4\" (UID: \"73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4\") "
Jan 21 10:06:15 crc kubenswrapper[5119]: I0121 10:06:15.829819 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4-bundle\") pod \"73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4\" (UID: \"73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4\") "
Jan 21 10:06:15 crc kubenswrapper[5119]: I0121 10:06:15.830589 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4-bundle" (OuterVolumeSpecName: "bundle") pod "73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4" (UID: "73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:06:15 crc kubenswrapper[5119]: I0121 10:06:15.832826 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4-kube-api-access-bxgnv" (OuterVolumeSpecName: "kube-api-access-bxgnv") pod "73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4" (UID: "73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4"). InnerVolumeSpecName "kube-api-access-bxgnv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:06:15 crc kubenswrapper[5119]: I0121 10:06:15.850924 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4-util" (OuterVolumeSpecName: "util") pod "73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4" (UID: "73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:06:15 crc kubenswrapper[5119]: I0121 10:06:15.930933 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bxgnv\" (UniqueName: \"kubernetes.io/projected/73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4-kube-api-access-bxgnv\") on node \"crc\" DevicePath \"\""
Jan 21 10:06:15 crc kubenswrapper[5119]: I0121 10:06:15.930963 5119 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4-util\") on node \"crc\" DevicePath \"\""
Jan 21 10:06:15 crc kubenswrapper[5119]: I0121 10:06:15.930972 5119 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.425014 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-tbcj4"]
Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.425717 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8e328aac-fdc5-4809-9e67-d4e3cbe46404" containerName="extract"
Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.425739 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e328aac-fdc5-4809-9e67-d4e3cbe46404" containerName="extract"
Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.425752 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4" containerName="pull"
Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.425759 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4" containerName="pull"
Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.425769 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4" containerName="extract"
Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.425776 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4" containerName="extract"
Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.425799 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8e328aac-fdc5-4809-9e67-d4e3cbe46404" containerName="util"
Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.425806 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e328aac-fdc5-4809-9e67-d4e3cbe46404" containerName="util"
Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.425822 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4" containerName="util"
Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.425829 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4" containerName="util"
Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.425840 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8e328aac-fdc5-4809-9e67-d4e3cbe46404" containerName="pull"
Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.425846 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e328aac-fdc5-4809-9e67-d4e3cbe46404" containerName="pull"
Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.425941 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4" containerName="extract"
Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.425957 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="8e328aac-fdc5-4809-9e67-d4e3cbe46404" containerName="extract"
Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.628328 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx"
event={"ID":"73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4","Type":"ContainerDied","Data":"bef8df08c1094cba8742985e48a2d2d76815d0b74ebf4601616fb9e0a58ee2a6"}
Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.628370 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bef8df08c1094cba8742985e48a2d2d76815d0b74ebf4601616fb9e0a58ee2a6"
Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.628379 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-tbcj4"
Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.631049 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx"
Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.634483 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\""
Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.634979 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-tbcj4"]
Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.635016 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-69b968d655-n75gs"]
Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.636982 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-r62cs\""
Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.637299 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\""
Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.640349 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-69b968d655-n75gs"]
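The kubenswrapper entries above all share klog's single-line header after the journald prefix: a severity letter fused with the MMDD date (`I0121`), a microsecond timestamp, the emitting PID, and the `file.go:line` source, followed by the quoted message. A minimal sketch of a parser for that header, written against the layout visible in these lines (the field names in the returned dict are my own, not anything klog defines):

```python
import re

# klog header as seen in the log above, e.g.:
#   I0121 10:06:16.628328 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" ...
# sev: one of I/W/E/F, mmdd: month+day, then time, PID, source file:line, message.
KLOG_RE = re.compile(
    r'(?P<sev>[IWEF])(?P<mmdd>\d{4}) '
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+'
    r'(?P<pid>\d+) '
    r'(?P<src>[\w.]+:\d+)\] '
    r'(?P<msg>.*)'
)

def parse_klog(line: str):
    """Return the klog header fields of one journald line as a dict, or None."""
    m = KLOG_RE.search(line)
    return m.groupdict() if m else None
```

Because `parse_klog` searches rather than anchoring at the start of the line, the journald prefix (`Jan 21 10:06:16 crc kubenswrapper[5119]:`) is skipped automatically.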
Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.640369 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-69b968d655-7rzqs"] Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.640475 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69b968d655-n75gs" Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.643180 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.643259 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-9gp75\"" Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.644714 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-69b968d655-7rzqs"] Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.644810 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69b968d655-7rzqs" Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.739287 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ee8142e1-cc8a-44c9-b122-940344748596-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-69b968d655-n75gs\" (UID: \"ee8142e1-cc8a-44c9-b122-940344748596\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69b968d655-n75gs" Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.739559 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4frs9\" (UniqueName: \"kubernetes.io/projected/7c6b806a-236c-47d9-bd21-32fcaee5b1ec-kube-api-access-4frs9\") pod \"obo-prometheus-operator-9bc85b4bf-tbcj4\" (UID: \"7c6b806a-236c-47d9-bd21-32fcaee5b1ec\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-tbcj4" Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.739698 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ee8142e1-cc8a-44c9-b122-940344748596-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-69b968d655-n75gs\" (UID: \"ee8142e1-cc8a-44c9-b122-940344748596\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69b968d655-n75gs" Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.739785 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/14cb3c42-9784-4142-a581-86863911936b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-69b968d655-7rzqs\" (UID: \"14cb3c42-9784-4142-a581-86863911936b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69b968d655-7rzqs" Jan 21 
10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.739903 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/14cb3c42-9784-4142-a581-86863911936b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-69b968d655-7rzqs\" (UID: \"14cb3c42-9784-4142-a581-86863911936b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69b968d655-7rzqs" Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.798562 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-85c68dddb-gdq46"] Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.804913 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-gdq46" Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.808854 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.809022 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-8k7k7\"" Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.819045 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-gdq46"] Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.840648 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ee8142e1-cc8a-44c9-b122-940344748596-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-69b968d655-n75gs\" (UID: \"ee8142e1-cc8a-44c9-b122-940344748596\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69b968d655-n75gs" Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.840880 5119 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4frs9\" (UniqueName: \"kubernetes.io/projected/7c6b806a-236c-47d9-bd21-32fcaee5b1ec-kube-api-access-4frs9\") pod \"obo-prometheus-operator-9bc85b4bf-tbcj4\" (UID: \"7c6b806a-236c-47d9-bd21-32fcaee5b1ec\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-tbcj4" Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.840999 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ee8142e1-cc8a-44c9-b122-940344748596-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-69b968d655-n75gs\" (UID: \"ee8142e1-cc8a-44c9-b122-940344748596\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69b968d655-n75gs" Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.841084 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/14cb3c42-9784-4142-a581-86863911936b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-69b968d655-7rzqs\" (UID: \"14cb3c42-9784-4142-a581-86863911936b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69b968d655-7rzqs" Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.841169 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/14cb3c42-9784-4142-a581-86863911936b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-69b968d655-7rzqs\" (UID: \"14cb3c42-9784-4142-a581-86863911936b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69b968d655-7rzqs" Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.844934 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ee8142e1-cc8a-44c9-b122-940344748596-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-69b968d655-n75gs\" (UID: \"ee8142e1-cc8a-44c9-b122-940344748596\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69b968d655-n75gs" Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.846102 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ee8142e1-cc8a-44c9-b122-940344748596-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-69b968d655-n75gs\" (UID: \"ee8142e1-cc8a-44c9-b122-940344748596\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69b968d655-n75gs" Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.848144 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/14cb3c42-9784-4142-a581-86863911936b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-69b968d655-7rzqs\" (UID: \"14cb3c42-9784-4142-a581-86863911936b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69b968d655-7rzqs" Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.858410 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4frs9\" (UniqueName: \"kubernetes.io/projected/7c6b806a-236c-47d9-bd21-32fcaee5b1ec-kube-api-access-4frs9\") pod \"obo-prometheus-operator-9bc85b4bf-tbcj4\" (UID: \"7c6b806a-236c-47d9-bd21-32fcaee5b1ec\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-tbcj4" Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.859710 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/14cb3c42-9784-4142-a581-86863911936b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-69b968d655-7rzqs\" (UID: \"14cb3c42-9784-4142-a581-86863911936b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69b968d655-7rzqs" Jan 21 10:06:16 crc 
kubenswrapper[5119]: I0121 10:06:16.942426 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws4jw\" (UniqueName: \"kubernetes.io/projected/feb935e3-7103-46f7-ab84-2ba969146f6f-kube-api-access-ws4jw\") pod \"observability-operator-85c68dddb-gdq46\" (UID: \"feb935e3-7103-46f7-ab84-2ba969146f6f\") " pod="openshift-operators/observability-operator-85c68dddb-gdq46" Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.942489 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/feb935e3-7103-46f7-ab84-2ba969146f6f-observability-operator-tls\") pod \"observability-operator-85c68dddb-gdq46\" (UID: \"feb935e3-7103-46f7-ab84-2ba969146f6f\") " pod="openshift-operators/observability-operator-85c68dddb-gdq46" Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.946037 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-tbcj4" Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.976003 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69b968d655-n75gs" Jan 21 10:06:16 crc kubenswrapper[5119]: I0121 10:06:16.984673 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69b968d655-7rzqs" Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.001862 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-6kpd9"] Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.013259 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-6kpd9" Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.015102 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-2s4k7\"" Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.017075 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-6kpd9"] Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.043880 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ws4jw\" (UniqueName: \"kubernetes.io/projected/feb935e3-7103-46f7-ab84-2ba969146f6f-kube-api-access-ws4jw\") pod \"observability-operator-85c68dddb-gdq46\" (UID: \"feb935e3-7103-46f7-ab84-2ba969146f6f\") " pod="openshift-operators/observability-operator-85c68dddb-gdq46" Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.043935 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/feb935e3-7103-46f7-ab84-2ba969146f6f-observability-operator-tls\") pod \"observability-operator-85c68dddb-gdq46\" (UID: \"feb935e3-7103-46f7-ab84-2ba969146f6f\") " pod="openshift-operators/observability-operator-85c68dddb-gdq46" Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.053525 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/feb935e3-7103-46f7-ab84-2ba969146f6f-observability-operator-tls\") pod \"observability-operator-85c68dddb-gdq46\" (UID: \"feb935e3-7103-46f7-ab84-2ba969146f6f\") " pod="openshift-operators/observability-operator-85c68dddb-gdq46" Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.072484 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws4jw\" (UniqueName: 
\"kubernetes.io/projected/feb935e3-7103-46f7-ab84-2ba969146f6f-kube-api-access-ws4jw\") pod \"observability-operator-85c68dddb-gdq46\" (UID: \"feb935e3-7103-46f7-ab84-2ba969146f6f\") " pod="openshift-operators/observability-operator-85c68dddb-gdq46" Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.120875 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-gdq46" Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.146183 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e0115c16-2bbf-4c9a-8731-b4f799070b87-openshift-service-ca\") pod \"perses-operator-669c9f96b5-6kpd9\" (UID: \"e0115c16-2bbf-4c9a-8731-b4f799070b87\") " pod="openshift-operators/perses-operator-669c9f96b5-6kpd9" Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.146231 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4np8\" (UniqueName: \"kubernetes.io/projected/e0115c16-2bbf-4c9a-8731-b4f799070b87-kube-api-access-z4np8\") pod \"perses-operator-669c9f96b5-6kpd9\" (UID: \"e0115c16-2bbf-4c9a-8731-b4f799070b87\") " pod="openshift-operators/perses-operator-669c9f96b5-6kpd9" Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.254450 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e0115c16-2bbf-4c9a-8731-b4f799070b87-openshift-service-ca\") pod \"perses-operator-669c9f96b5-6kpd9\" (UID: \"e0115c16-2bbf-4c9a-8731-b4f799070b87\") " pod="openshift-operators/perses-operator-669c9f96b5-6kpd9" Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.254496 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z4np8\" (UniqueName: 
\"kubernetes.io/projected/e0115c16-2bbf-4c9a-8731-b4f799070b87-kube-api-access-z4np8\") pod \"perses-operator-669c9f96b5-6kpd9\" (UID: \"e0115c16-2bbf-4c9a-8731-b4f799070b87\") " pod="openshift-operators/perses-operator-669c9f96b5-6kpd9" Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.255471 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e0115c16-2bbf-4c9a-8731-b4f799070b87-openshift-service-ca\") pod \"perses-operator-669c9f96b5-6kpd9\" (UID: \"e0115c16-2bbf-4c9a-8731-b4f799070b87\") " pod="openshift-operators/perses-operator-669c9f96b5-6kpd9" Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.278086 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4np8\" (UniqueName: \"kubernetes.io/projected/e0115c16-2bbf-4c9a-8731-b4f799070b87-kube-api-access-z4np8\") pod \"perses-operator-669c9f96b5-6kpd9\" (UID: \"e0115c16-2bbf-4c9a-8731-b4f799070b87\") " pod="openshift-operators/perses-operator-669c9f96b5-6kpd9" Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.379458 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-6kpd9" Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.472209 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-gdq46"] Jan 21 10:06:17 crc kubenswrapper[5119]: W0121 10:06:17.487381 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfeb935e3_7103_46f7_ab84_2ba969146f6f.slice/crio-0a6e91a9547bc26c968db442f71da2922bb02188ac12dd3632145c4dfe059c01 WatchSource:0}: Error finding container 0a6e91a9547bc26c968db442f71da2922bb02188ac12dd3632145c4dfe059c01: Status 404 returned error can't find the container with id 0a6e91a9547bc26c968db442f71da2922bb02188ac12dd3632145c4dfe059c01 Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.508027 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-gdq46" event={"ID":"feb935e3-7103-46f7-ab84-2ba969146f6f","Type":"ContainerStarted","Data":"0a6e91a9547bc26c968db442f71da2922bb02188ac12dd3632145c4dfe059c01"} Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.538523 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-tbcj4"] Jan 21 10:06:17 crc kubenswrapper[5119]: W0121 10:06:17.555838 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c6b806a_236c_47d9_bd21_32fcaee5b1ec.slice/crio-1660e7ad998dffe476e7cb9cef97c03afeccb210151875019c5beda7fb22a4f6 WatchSource:0}: Error finding container 1660e7ad998dffe476e7cb9cef97c03afeccb210151875019c5beda7fb22a4f6: Status 404 returned error can't find the container with id 1660e7ad998dffe476e7cb9cef97c03afeccb210151875019c5beda7fb22a4f6 Jan 21 10:06:17 crc kubenswrapper[5119]: W0121 10:06:17.590752 5119 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14cb3c42_9784_4142_a581_86863911936b.slice/crio-031cbad174614bf47e87b5dbf2b8552c9b691f3521212001d1497e13533e65fb WatchSource:0}: Error finding container 031cbad174614bf47e87b5dbf2b8552c9b691f3521212001d1497e13533e65fb: Status 404 returned error can't find the container with id 031cbad174614bf47e87b5dbf2b8552c9b691f3521212001d1497e13533e65fb Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.594467 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-69b968d655-7rzqs"] Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.605295 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-69b968d655-n75gs"] Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.627487 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-6kpd9"] Jan 21 10:06:17 crc kubenswrapper[5119]: W0121 10:06:17.633867 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0115c16_2bbf_4c9a_8731_b4f799070b87.slice/crio-56f9918372e77f9e94407856adfa806ff7a45b5f08ecf645ff04c5d13b40a377 WatchSource:0}: Error finding container 56f9918372e77f9e94407856adfa806ff7a45b5f08ecf645ff04c5d13b40a377: Status 404 returned error can't find the container with id 56f9918372e77f9e94407856adfa806ff7a45b5f08ecf645ff04c5d13b40a377 Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.762239 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7"] Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.769295 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7" Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.773778 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.787511 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7"] Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.863888 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/96725402-3741-49bb-a915-6e04fde9ee9d-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7\" (UID: \"96725402-3741-49bb-a915-6e04fde9ee9d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7" Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.863999 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz946\" (UniqueName: \"kubernetes.io/projected/96725402-3741-49bb-a915-6e04fde9ee9d-kube-api-access-hz946\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7\" (UID: \"96725402-3741-49bb-a915-6e04fde9ee9d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7" Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.864047 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/96725402-3741-49bb-a915-6e04fde9ee9d-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7\" (UID: \"96725402-3741-49bb-a915-6e04fde9ee9d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7" Jan 21 10:06:17 crc 
kubenswrapper[5119]: I0121 10:06:17.964845 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hz946\" (UniqueName: \"kubernetes.io/projected/96725402-3741-49bb-a915-6e04fde9ee9d-kube-api-access-hz946\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7\" (UID: \"96725402-3741-49bb-a915-6e04fde9ee9d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7" Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.964907 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/96725402-3741-49bb-a915-6e04fde9ee9d-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7\" (UID: \"96725402-3741-49bb-a915-6e04fde9ee9d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7" Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.964939 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/96725402-3741-49bb-a915-6e04fde9ee9d-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7\" (UID: \"96725402-3741-49bb-a915-6e04fde9ee9d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7" Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.965979 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/96725402-3741-49bb-a915-6e04fde9ee9d-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7\" (UID: \"96725402-3741-49bb-a915-6e04fde9ee9d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7" Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.966201 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/96725402-3741-49bb-a915-6e04fde9ee9d-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7\" (UID: \"96725402-3741-49bb-a915-6e04fde9ee9d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7" Jan 21 10:06:17 crc kubenswrapper[5119]: I0121 10:06:17.991405 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz946\" (UniqueName: \"kubernetes.io/projected/96725402-3741-49bb-a915-6e04fde9ee9d-kube-api-access-hz946\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7\" (UID: \"96725402-3741-49bb-a915-6e04fde9ee9d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7" Jan 21 10:06:18 crc kubenswrapper[5119]: I0121 10:06:18.084652 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7" Jan 21 10:06:18 crc kubenswrapper[5119]: I0121 10:06:18.306276 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7"] Jan 21 10:06:18 crc kubenswrapper[5119]: I0121 10:06:18.527343 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69b968d655-n75gs" event={"ID":"ee8142e1-cc8a-44c9-b122-940344748596","Type":"ContainerStarted","Data":"c99e03d90e2190642782e2efeb2049acaaaf57a468a6ec1d8eae13894f90a6e9"} Jan 21 10:06:18 crc kubenswrapper[5119]: I0121 10:06:18.532541 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-6kpd9" event={"ID":"e0115c16-2bbf-4c9a-8731-b4f799070b87","Type":"ContainerStarted","Data":"56f9918372e77f9e94407856adfa806ff7a45b5f08ecf645ff04c5d13b40a377"} Jan 21 10:06:18 crc kubenswrapper[5119]: I0121 10:06:18.533701 5119 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7" event={"ID":"96725402-3741-49bb-a915-6e04fde9ee9d","Type":"ContainerStarted","Data":"e25f6cb257212ffaee6fd7d4359dbf83b245daa8f77f62d99799cc9a96f96e90"} Jan 21 10:06:18 crc kubenswrapper[5119]: I0121 10:06:18.533723 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7" event={"ID":"96725402-3741-49bb-a915-6e04fde9ee9d","Type":"ContainerStarted","Data":"5e4b7879aab90b9e00da8962ab0f80e4613065b836e48fafb1c2f76e4276dd82"} Jan 21 10:06:18 crc kubenswrapper[5119]: I0121 10:06:18.544184 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-tbcj4" event={"ID":"7c6b806a-236c-47d9-bd21-32fcaee5b1ec","Type":"ContainerStarted","Data":"1660e7ad998dffe476e7cb9cef97c03afeccb210151875019c5beda7fb22a4f6"} Jan 21 10:06:18 crc kubenswrapper[5119]: I0121 10:06:18.545873 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69b968d655-7rzqs" event={"ID":"14cb3c42-9784-4142-a581-86863911936b","Type":"ContainerStarted","Data":"031cbad174614bf47e87b5dbf2b8552c9b691f3521212001d1497e13533e65fb"} Jan 21 10:06:19 crc kubenswrapper[5119]: I0121 10:06:19.562916 5119 generic.go:358] "Generic (PLEG): container finished" podID="96725402-3741-49bb-a915-6e04fde9ee9d" containerID="e25f6cb257212ffaee6fd7d4359dbf83b245daa8f77f62d99799cc9a96f96e90" exitCode=0 Jan 21 10:06:19 crc kubenswrapper[5119]: I0121 10:06:19.564496 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7" event={"ID":"96725402-3741-49bb-a915-6e04fde9ee9d","Type":"ContainerDied","Data":"e25f6cb257212ffaee6fd7d4359dbf83b245daa8f77f62d99799cc9a96f96e90"} Jan 21 10:06:19 crc kubenswrapper[5119]: I0121 10:06:19.918801 5119 
patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:06:19 crc kubenswrapper[5119]: I0121 10:06:19.918853 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:06:22 crc kubenswrapper[5119]: I0121 10:06:22.609997 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-7978d4ccbd-fg5zr"] Jan 21 10:06:22 crc kubenswrapper[5119]: I0121 10:06:22.617547 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-7978d4ccbd-fg5zr" Jan 21 10:06:22 crc kubenswrapper[5119]: I0121 10:06:22.619530 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-7978d4ccbd-fg5zr"] Jan 21 10:06:22 crc kubenswrapper[5119]: I0121 10:06:22.621811 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-25vxk\"" Jan 21 10:06:22 crc kubenswrapper[5119]: I0121 10:06:22.622055 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Jan 21 10:06:22 crc kubenswrapper[5119]: I0121 10:06:22.622258 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Jan 21 10:06:22 crc kubenswrapper[5119]: I0121 10:06:22.622427 5119 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Jan 21 10:06:22 crc kubenswrapper[5119]: I0121 10:06:22.741637 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4fcd5f26-6c5c-40b8-9c33-ff3679b1c09f-apiservice-cert\") pod \"elastic-operator-7978d4ccbd-fg5zr\" (UID: \"4fcd5f26-6c5c-40b8-9c33-ff3679b1c09f\") " pod="service-telemetry/elastic-operator-7978d4ccbd-fg5zr" Jan 21 10:06:22 crc kubenswrapper[5119]: I0121 10:06:22.741729 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fcd5f26-6c5c-40b8-9c33-ff3679b1c09f-webhook-cert\") pod \"elastic-operator-7978d4ccbd-fg5zr\" (UID: \"4fcd5f26-6c5c-40b8-9c33-ff3679b1c09f\") " pod="service-telemetry/elastic-operator-7978d4ccbd-fg5zr" Jan 21 10:06:22 crc kubenswrapper[5119]: I0121 10:06:22.742750 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfgs8\" (UniqueName: \"kubernetes.io/projected/4fcd5f26-6c5c-40b8-9c33-ff3679b1c09f-kube-api-access-zfgs8\") pod \"elastic-operator-7978d4ccbd-fg5zr\" (UID: \"4fcd5f26-6c5c-40b8-9c33-ff3679b1c09f\") " pod="service-telemetry/elastic-operator-7978d4ccbd-fg5zr" Jan 21 10:06:22 crc kubenswrapper[5119]: I0121 10:06:22.844377 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4fcd5f26-6c5c-40b8-9c33-ff3679b1c09f-apiservice-cert\") pod \"elastic-operator-7978d4ccbd-fg5zr\" (UID: \"4fcd5f26-6c5c-40b8-9c33-ff3679b1c09f\") " pod="service-telemetry/elastic-operator-7978d4ccbd-fg5zr" Jan 21 10:06:22 crc kubenswrapper[5119]: I0121 10:06:22.844424 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/4fcd5f26-6c5c-40b8-9c33-ff3679b1c09f-webhook-cert\") pod \"elastic-operator-7978d4ccbd-fg5zr\" (UID: \"4fcd5f26-6c5c-40b8-9c33-ff3679b1c09f\") " pod="service-telemetry/elastic-operator-7978d4ccbd-fg5zr" Jan 21 10:06:22 crc kubenswrapper[5119]: I0121 10:06:22.844463 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zfgs8\" (UniqueName: \"kubernetes.io/projected/4fcd5f26-6c5c-40b8-9c33-ff3679b1c09f-kube-api-access-zfgs8\") pod \"elastic-operator-7978d4ccbd-fg5zr\" (UID: \"4fcd5f26-6c5c-40b8-9c33-ff3679b1c09f\") " pod="service-telemetry/elastic-operator-7978d4ccbd-fg5zr" Jan 21 10:06:22 crc kubenswrapper[5119]: I0121 10:06:22.851080 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fcd5f26-6c5c-40b8-9c33-ff3679b1c09f-webhook-cert\") pod \"elastic-operator-7978d4ccbd-fg5zr\" (UID: \"4fcd5f26-6c5c-40b8-9c33-ff3679b1c09f\") " pod="service-telemetry/elastic-operator-7978d4ccbd-fg5zr" Jan 21 10:06:22 crc kubenswrapper[5119]: I0121 10:06:22.855142 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4fcd5f26-6c5c-40b8-9c33-ff3679b1c09f-apiservice-cert\") pod \"elastic-operator-7978d4ccbd-fg5zr\" (UID: \"4fcd5f26-6c5c-40b8-9c33-ff3679b1c09f\") " pod="service-telemetry/elastic-operator-7978d4ccbd-fg5zr" Jan 21 10:06:22 crc kubenswrapper[5119]: I0121 10:06:22.862261 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfgs8\" (UniqueName: \"kubernetes.io/projected/4fcd5f26-6c5c-40b8-9c33-ff3679b1c09f-kube-api-access-zfgs8\") pod \"elastic-operator-7978d4ccbd-fg5zr\" (UID: \"4fcd5f26-6c5c-40b8-9c33-ff3679b1c09f\") " pod="service-telemetry/elastic-operator-7978d4ccbd-fg5zr" Jan 21 10:06:22 crc kubenswrapper[5119]: I0121 10:06:22.958652 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-7978d4ccbd-fg5zr" Jan 21 10:06:25 crc kubenswrapper[5119]: I0121 10:06:25.321972 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-62b8q"] Jan 21 10:06:25 crc kubenswrapper[5119]: I0121 10:06:25.333403 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-62b8q"] Jan 21 10:06:25 crc kubenswrapper[5119]: I0121 10:06:25.333622 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-62b8q" Jan 21 10:06:25 crc kubenswrapper[5119]: I0121 10:06:25.335766 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-tzth6\"" Jan 21 10:06:25 crc kubenswrapper[5119]: I0121 10:06:25.481623 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw9m6\" (UniqueName: \"kubernetes.io/projected/28455f19-bea4-4979-bc01-e9ca6f14c7e6-kube-api-access-vw9m6\") pod \"interconnect-operator-78b9bd8798-62b8q\" (UID: \"28455f19-bea4-4979-bc01-e9ca6f14c7e6\") " pod="service-telemetry/interconnect-operator-78b9bd8798-62b8q" Jan 21 10:06:25 crc kubenswrapper[5119]: I0121 10:06:25.583544 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vw9m6\" (UniqueName: \"kubernetes.io/projected/28455f19-bea4-4979-bc01-e9ca6f14c7e6-kube-api-access-vw9m6\") pod \"interconnect-operator-78b9bd8798-62b8q\" (UID: \"28455f19-bea4-4979-bc01-e9ca6f14c7e6\") " pod="service-telemetry/interconnect-operator-78b9bd8798-62b8q" Jan 21 10:06:25 crc kubenswrapper[5119]: I0121 10:06:25.618725 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw9m6\" (UniqueName: \"kubernetes.io/projected/28455f19-bea4-4979-bc01-e9ca6f14c7e6-kube-api-access-vw9m6\") pod 
\"interconnect-operator-78b9bd8798-62b8q\" (UID: \"28455f19-bea4-4979-bc01-e9ca6f14c7e6\") " pod="service-telemetry/interconnect-operator-78b9bd8798-62b8q" Jan 21 10:06:25 crc kubenswrapper[5119]: I0121 10:06:25.668020 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-62b8q" Jan 21 10:06:32 crc kubenswrapper[5119]: I0121 10:06:32.276618 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-7978d4ccbd-fg5zr"] Jan 21 10:06:32 crc kubenswrapper[5119]: W0121 10:06:32.293846 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fcd5f26_6c5c_40b8_9c33_ff3679b1c09f.slice/crio-a803c5c15185a7e1d4764b68173f08ccc29fb3f4aad7963c713e9cf1e00ae6fc WatchSource:0}: Error finding container a803c5c15185a7e1d4764b68173f08ccc29fb3f4aad7963c713e9cf1e00ae6fc: Status 404 returned error can't find the container with id a803c5c15185a7e1d4764b68173f08ccc29fb3f4aad7963c713e9cf1e00ae6fc Jan 21 10:06:32 crc kubenswrapper[5119]: I0121 10:06:32.333869 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-62b8q"] Jan 21 10:06:32 crc kubenswrapper[5119]: W0121 10:06:32.345574 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28455f19_bea4_4979_bc01_e9ca6f14c7e6.slice/crio-f92bfd1c00d19b563669755c3d40fdae356bbea2bd48001fac25f16e3ec6367e WatchSource:0}: Error finding container f92bfd1c00d19b563669755c3d40fdae356bbea2bd48001fac25f16e3ec6367e: Status 404 returned error can't find the container with id f92bfd1c00d19b563669755c3d40fdae356bbea2bd48001fac25f16e3ec6367e Jan 21 10:06:32 crc kubenswrapper[5119]: I0121 10:06:32.691239 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-gdq46" 
event={"ID":"feb935e3-7103-46f7-ab84-2ba969146f6f","Type":"ContainerStarted","Data":"03fe080539c17ed02f736afea117cced8b9b9e46ec55d0f0885afe3349e8bd3f"} Jan 21 10:06:32 crc kubenswrapper[5119]: I0121 10:06:32.691596 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-85c68dddb-gdq46" Jan 21 10:06:32 crc kubenswrapper[5119]: I0121 10:06:32.700871 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-tbcj4" event={"ID":"7c6b806a-236c-47d9-bd21-32fcaee5b1ec","Type":"ContainerStarted","Data":"42776e40bed18ad0d93a31ec7fe2291fcd52bbddf1882393409d459bc1ccd9c0"} Jan 21 10:06:32 crc kubenswrapper[5119]: I0121 10:06:32.703436 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69b968d655-7rzqs" event={"ID":"14cb3c42-9784-4142-a581-86863911936b","Type":"ContainerStarted","Data":"d1dded9a2c70e121329a68dd07feb7f09031aebbac981b5bfb140a75c1e5d1e3"} Jan 21 10:06:32 crc kubenswrapper[5119]: I0121 10:06:32.704875 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-62b8q" event={"ID":"28455f19-bea4-4979-bc01-e9ca6f14c7e6","Type":"ContainerStarted","Data":"f92bfd1c00d19b563669755c3d40fdae356bbea2bd48001fac25f16e3ec6367e"} Jan 21 10:06:32 crc kubenswrapper[5119]: I0121 10:06:32.706468 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-7978d4ccbd-fg5zr" event={"ID":"4fcd5f26-6c5c-40b8-9c33-ff3679b1c09f","Type":"ContainerStarted","Data":"a803c5c15185a7e1d4764b68173f08ccc29fb3f4aad7963c713e9cf1e00ae6fc"} Jan 21 10:06:32 crc kubenswrapper[5119]: I0121 10:06:32.708002 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69b968d655-n75gs" 
event={"ID":"ee8142e1-cc8a-44c9-b122-940344748596","Type":"ContainerStarted","Data":"9dc7b471b68f130f077cf7a756623fa895d62d40055adc38580f9dba09fd17f5"} Jan 21 10:06:32 crc kubenswrapper[5119]: I0121 10:06:32.709399 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-6kpd9" event={"ID":"e0115c16-2bbf-4c9a-8731-b4f799070b87","Type":"ContainerStarted","Data":"8665b76246fff6d2d81b7cb51f1d0dcbf607641e5378bf12dbed7849324eb9be"} Jan 21 10:06:32 crc kubenswrapper[5119]: I0121 10:06:32.709529 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-669c9f96b5-6kpd9" Jan 21 10:06:32 crc kubenswrapper[5119]: I0121 10:06:32.713023 5119 generic.go:358] "Generic (PLEG): container finished" podID="96725402-3741-49bb-a915-6e04fde9ee9d" containerID="35807a8e7a91f8baa7510bd05e82cf99e14a551c2fd79bc6d7509205292b431a" exitCode=0 Jan 21 10:06:32 crc kubenswrapper[5119]: I0121 10:06:32.713072 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7" event={"ID":"96725402-3741-49bb-a915-6e04fde9ee9d","Type":"ContainerDied","Data":"35807a8e7a91f8baa7510bd05e82cf99e14a551c2fd79bc6d7509205292b431a"} Jan 21 10:06:32 crc kubenswrapper[5119]: I0121 10:06:32.714545 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-85c68dddb-gdq46" podStartSLOduration=2.129636085 podStartE2EDuration="16.714529165s" podCreationTimestamp="2026-01-21 10:06:16 +0000 UTC" firstStartedPulling="2026-01-21 10:06:17.489345173 +0000 UTC m=+693.157436851" lastFinishedPulling="2026-01-21 10:06:32.074238253 +0000 UTC m=+707.742329931" observedRunningTime="2026-01-21 10:06:32.710650087 +0000 UTC m=+708.378741785" watchObservedRunningTime="2026-01-21 10:06:32.714529165 +0000 UTC m=+708.382620843" Jan 21 10:06:32 crc kubenswrapper[5119]: I0121 
10:06:32.736984 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69b968d655-7rzqs" podStartSLOduration=2.2636255690000002 podStartE2EDuration="16.736966742s" podCreationTimestamp="2026-01-21 10:06:16 +0000 UTC" firstStartedPulling="2026-01-21 10:06:17.594888772 +0000 UTC m=+693.262980450" lastFinishedPulling="2026-01-21 10:06:32.068229935 +0000 UTC m=+707.736321623" observedRunningTime="2026-01-21 10:06:32.736548121 +0000 UTC m=+708.404639799" watchObservedRunningTime="2026-01-21 10:06:32.736966742 +0000 UTC m=+708.405058420" Jan 21 10:06:32 crc kubenswrapper[5119]: I0121 10:06:32.743223 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-85c68dddb-gdq46" Jan 21 10:06:32 crc kubenswrapper[5119]: I0121 10:06:32.761439 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-669c9f96b5-6kpd9" podStartSLOduration=2.326786114 podStartE2EDuration="16.761424516s" podCreationTimestamp="2026-01-21 10:06:16 +0000 UTC" firstStartedPulling="2026-01-21 10:06:17.639221431 +0000 UTC m=+693.307313109" lastFinishedPulling="2026-01-21 10:06:32.073859833 +0000 UTC m=+707.741951511" observedRunningTime="2026-01-21 10:06:32.757512217 +0000 UTC m=+708.425603885" watchObservedRunningTime="2026-01-21 10:06:32.761424516 +0000 UTC m=+708.429516194" Jan 21 10:06:32 crc kubenswrapper[5119]: I0121 10:06:32.777171 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-tbcj4" podStartSLOduration=2.252101958 podStartE2EDuration="16.777157556s" podCreationTimestamp="2026-01-21 10:06:16 +0000 UTC" firstStartedPulling="2026-01-21 10:06:17.55831309 +0000 UTC m=+693.226404768" lastFinishedPulling="2026-01-21 10:06:32.083368698 +0000 UTC m=+707.751460366" observedRunningTime="2026-01-21 10:06:32.776098966 +0000 
UTC m=+708.444190644" watchObservedRunningTime="2026-01-21 10:06:32.777157556 +0000 UTC m=+708.445249234" Jan 21 10:06:32 crc kubenswrapper[5119]: I0121 10:06:32.836785 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69b968d655-n75gs" podStartSLOduration=2.364012225 podStartE2EDuration="16.836766312s" podCreationTimestamp="2026-01-21 10:06:16 +0000 UTC" firstStartedPulling="2026-01-21 10:06:17.615713714 +0000 UTC m=+693.283805392" lastFinishedPulling="2026-01-21 10:06:32.088467801 +0000 UTC m=+707.756559479" observedRunningTime="2026-01-21 10:06:32.801417043 +0000 UTC m=+708.469508711" watchObservedRunningTime="2026-01-21 10:06:32.836766312 +0000 UTC m=+708.504857990" Jan 21 10:06:33 crc kubenswrapper[5119]: I0121 10:06:33.724868 5119 generic.go:358] "Generic (PLEG): container finished" podID="96725402-3741-49bb-a915-6e04fde9ee9d" containerID="00691047be373d0c6c9cefe64c4edbc1a052e126be52d3c5fe1a62736cb33682" exitCode=0 Jan 21 10:06:33 crc kubenswrapper[5119]: I0121 10:06:33.724959 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7" event={"ID":"96725402-3741-49bb-a915-6e04fde9ee9d","Type":"ContainerDied","Data":"00691047be373d0c6c9cefe64c4edbc1a052e126be52d3c5fe1a62736cb33682"} Jan 21 10:06:35 crc kubenswrapper[5119]: I0121 10:06:35.823253 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7" Jan 21 10:06:35 crc kubenswrapper[5119]: I0121 10:06:35.929365 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/96725402-3741-49bb-a915-6e04fde9ee9d-util\") pod \"96725402-3741-49bb-a915-6e04fde9ee9d\" (UID: \"96725402-3741-49bb-a915-6e04fde9ee9d\") " Jan 21 10:06:35 crc kubenswrapper[5119]: I0121 10:06:35.929454 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hz946\" (UniqueName: \"kubernetes.io/projected/96725402-3741-49bb-a915-6e04fde9ee9d-kube-api-access-hz946\") pod \"96725402-3741-49bb-a915-6e04fde9ee9d\" (UID: \"96725402-3741-49bb-a915-6e04fde9ee9d\") " Jan 21 10:06:35 crc kubenswrapper[5119]: I0121 10:06:35.929593 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/96725402-3741-49bb-a915-6e04fde9ee9d-bundle\") pod \"96725402-3741-49bb-a915-6e04fde9ee9d\" (UID: \"96725402-3741-49bb-a915-6e04fde9ee9d\") " Jan 21 10:06:35 crc kubenswrapper[5119]: I0121 10:06:35.931887 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96725402-3741-49bb-a915-6e04fde9ee9d-bundle" (OuterVolumeSpecName: "bundle") pod "96725402-3741-49bb-a915-6e04fde9ee9d" (UID: "96725402-3741-49bb-a915-6e04fde9ee9d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:06:35 crc kubenswrapper[5119]: I0121 10:06:35.939814 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96725402-3741-49bb-a915-6e04fde9ee9d-kube-api-access-hz946" (OuterVolumeSpecName: "kube-api-access-hz946") pod "96725402-3741-49bb-a915-6e04fde9ee9d" (UID: "96725402-3741-49bb-a915-6e04fde9ee9d"). InnerVolumeSpecName "kube-api-access-hz946". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:06:35 crc kubenswrapper[5119]: I0121 10:06:35.951838 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96725402-3741-49bb-a915-6e04fde9ee9d-util" (OuterVolumeSpecName: "util") pod "96725402-3741-49bb-a915-6e04fde9ee9d" (UID: "96725402-3741-49bb-a915-6e04fde9ee9d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:06:36 crc kubenswrapper[5119]: I0121 10:06:36.030980 5119 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/96725402-3741-49bb-a915-6e04fde9ee9d-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:36 crc kubenswrapper[5119]: I0121 10:06:36.031016 5119 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/96725402-3741-49bb-a915-6e04fde9ee9d-util\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:36 crc kubenswrapper[5119]: I0121 10:06:36.031025 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hz946\" (UniqueName: \"kubernetes.io/projected/96725402-3741-49bb-a915-6e04fde9ee9d-kube-api-access-hz946\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:36 crc kubenswrapper[5119]: I0121 10:06:36.744835 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7" event={"ID":"96725402-3741-49bb-a915-6e04fde9ee9d","Type":"ContainerDied","Data":"5e4b7879aab90b9e00da8962ab0f80e4613065b836e48fafb1c2f76e4276dd82"} Jan 21 10:06:36 crc kubenswrapper[5119]: I0121 10:06:36.744880 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e4b7879aab90b9e00da8962ab0f80e4613065b836e48fafb1c2f76e4276dd82" Jan 21 10:06:36 crc kubenswrapper[5119]: I0121 10:06:36.744850 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7" Jan 21 10:06:36 crc kubenswrapper[5119]: I0121 10:06:36.748368 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-7978d4ccbd-fg5zr" event={"ID":"4fcd5f26-6c5c-40b8-9c33-ff3679b1c09f","Type":"ContainerStarted","Data":"57fdab49a3950db124046d3a862401013f83ce98d9e79ccc06dd9b808afc0d5f"} Jan 21 10:06:36 crc kubenswrapper[5119]: I0121 10:06:36.780245 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-7978d4ccbd-fg5zr" podStartSLOduration=11.243924529 podStartE2EDuration="14.780229221s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="2026-01-21 10:06:32.300391032 +0000 UTC m=+707.968482710" lastFinishedPulling="2026-01-21 10:06:35.836695724 +0000 UTC m=+711.504787402" observedRunningTime="2026-01-21 10:06:36.778968165 +0000 UTC m=+712.447059843" watchObservedRunningTime="2026-01-21 10:06:36.780229221 +0000 UTC m=+712.448320889" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.774120 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.775480 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="96725402-3741-49bb-a915-6e04fde9ee9d" containerName="pull" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.775495 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="96725402-3741-49bb-a915-6e04fde9ee9d" containerName="pull" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.775509 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="96725402-3741-49bb-a915-6e04fde9ee9d" containerName="extract" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.775515 5119 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="96725402-3741-49bb-a915-6e04fde9ee9d" containerName="extract" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.775543 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="96725402-3741-49bb-a915-6e04fde9ee9d" containerName="util" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.775548 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="96725402-3741-49bb-a915-6e04fde9ee9d" containerName="util" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.775656 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="96725402-3741-49bb-a915-6e04fde9ee9d" containerName="extract" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.809686 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.810745 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.817568 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.817687 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.817833 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.817933 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.818320 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.818498 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-wrpks\"" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.818618 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.818927 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.819078 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.876089 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.876135 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.876175 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: 
\"kubernetes.io/empty-dir/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.876193 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.876221 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/f12216d5-43c7-4e0c-be7a-74aa76900a78-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.876319 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/f12216d5-43c7-4e0c-be7a-74aa76900a78-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.876404 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc 
kubenswrapper[5119]: I0121 10:06:39.876428 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.876456 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f12216d5-43c7-4e0c-be7a-74aa76900a78-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.876472 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/f12216d5-43c7-4e0c-be7a-74aa76900a78-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.876667 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.876736 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: 
\"kubernetes.io/secret/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.876767 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.876795 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.876819 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.978172 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/f12216d5-43c7-4e0c-be7a-74aa76900a78-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " 
pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.978248 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.978269 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.978306 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f12216d5-43c7-4e0c-be7a-74aa76900a78-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.978322 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/f12216d5-43c7-4e0c-be7a-74aa76900a78-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.978355 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: 
\"kubernetes.io/secret/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.978399 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.978417 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.978461 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.978489 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" 
Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.978538 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.978571 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.978621 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.978647 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.978707 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/f12216d5-43c7-4e0c-be7a-74aa76900a78-elasticsearch-data\") pod 
\"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.978771 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/f12216d5-43c7-4e0c-be7a-74aa76900a78-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.979073 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.979361 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/f12216d5-43c7-4e0c-be7a-74aa76900a78-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.979730 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.979878 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: 
\"kubernetes.io/empty-dir/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.980760 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.981650 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.982707 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f12216d5-43c7-4e0c-be7a-74aa76900a78-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.986211 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.986890 5119 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.987467 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.992020 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.994325 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:39 crc kubenswrapper[5119]: I0121 10:06:39.997181 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/f12216d5-43c7-4e0c-be7a-74aa76900a78-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " 
pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:40 crc kubenswrapper[5119]: I0121 10:06:40.003431 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/f12216d5-43c7-4e0c-be7a-74aa76900a78-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"f12216d5-43c7-4e0c-be7a-74aa76900a78\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:40 crc kubenswrapper[5119]: I0121 10:06:40.154124 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:06:41 crc kubenswrapper[5119]: I0121 10:06:41.785007 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-62b8q" event={"ID":"28455f19-bea4-4979-bc01-e9ca6f14c7e6","Type":"ContainerStarted","Data":"7b84e8b0c4d227f2ff226b69b94cb91883d17a6641b0019cccddbf694e27822d"} Jan 21 10:06:41 crc kubenswrapper[5119]: I0121 10:06:41.801524 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-62b8q" podStartSLOduration=7.581271112 podStartE2EDuration="16.801508841s" podCreationTimestamp="2026-01-21 10:06:25 +0000 UTC" firstStartedPulling="2026-01-21 10:06:32.348658872 +0000 UTC m=+708.016750550" lastFinishedPulling="2026-01-21 10:06:41.568896591 +0000 UTC m=+717.236988279" observedRunningTime="2026-01-21 10:06:41.800547625 +0000 UTC m=+717.468639303" watchObservedRunningTime="2026-01-21 10:06:41.801508841 +0000 UTC m=+717.469600519" Jan 21 10:06:41 crc kubenswrapper[5119]: I0121 10:06:41.977559 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 21 10:06:42 crc kubenswrapper[5119]: I0121 10:06:42.799102 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" 
event={"ID":"f12216d5-43c7-4e0c-be7a-74aa76900a78","Type":"ContainerStarted","Data":"ae26a9a28d8dcfbb35203bcb3c6680240538ebab4bc698859103e9f109f2cfb9"} Jan 21 10:06:43 crc kubenswrapper[5119]: I0121 10:06:43.728476 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-669c9f96b5-6kpd9" Jan 21 10:06:45 crc kubenswrapper[5119]: I0121 10:06:45.005015 5119 scope.go:117] "RemoveContainer" containerID="d5b54ddf4bdb1f499d4fd60b317952e9ac2c24159d2ed43fa4e588b7616d3d6b" Jan 21 10:06:46 crc kubenswrapper[5119]: I0121 10:06:46.912416 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 21 10:06:46 crc kubenswrapper[5119]: I0121 10:06:46.918658 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:46 crc kubenswrapper[5119]: I0121 10:06:46.920865 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-llwsp\"" Jan 21 10:06:46 crc kubenswrapper[5119]: I0121 10:06:46.922102 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-sys-config\"" Jan 21 10:06:46 crc kubenswrapper[5119]: I0121 10:06:46.922125 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-global-ca\"" Jan 21 10:06:46 crc kubenswrapper[5119]: I0121 10:06:46.922403 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-ca\"" Jan 21 10:06:46 crc kubenswrapper[5119]: I0121 10:06:46.952659 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.073777 5119 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f107d213-9e7f-4629-bb11-b615a9518f52-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.073818 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f107d213-9e7f-4629-bb11-b615a9518f52-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.073842 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/f107d213-9e7f-4629-bb11-b615a9518f52-builder-dockercfg-llwsp-push\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.073860 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/f107d213-9e7f-4629-bb11-b615a9518f52-builder-dockercfg-llwsp-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.073927 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f107d213-9e7f-4629-bb11-b615a9518f52-build-proxy-ca-bundles\") pod 
\"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.073974 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f107d213-9e7f-4629-bb11-b615a9518f52-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.074102 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f107d213-9e7f-4629-bb11-b615a9518f52-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.074193 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mmzl\" (UniqueName: \"kubernetes.io/projected/f107d213-9e7f-4629-bb11-b615a9518f52-kube-api-access-6mmzl\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.074233 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f107d213-9e7f-4629-bb11-b615a9518f52-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.074270 5119 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f107d213-9e7f-4629-bb11-b615a9518f52-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.074301 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f107d213-9e7f-4629-bb11-b615a9518f52-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.074330 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f107d213-9e7f-4629-bb11-b615a9518f52-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.175106 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f107d213-9e7f-4629-bb11-b615a9518f52-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.175160 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f107d213-9e7f-4629-bb11-b615a9518f52-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.175180 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f107d213-9e7f-4629-bb11-b615a9518f52-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.175201 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/f107d213-9e7f-4629-bb11-b615a9518f52-builder-dockercfg-llwsp-push\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.175217 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/f107d213-9e7f-4629-bb11-b615a9518f52-builder-dockercfg-llwsp-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.175231 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f107d213-9e7f-4629-bb11-b615a9518f52-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.175244 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/f107d213-9e7f-4629-bb11-b615a9518f52-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.175280 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f107d213-9e7f-4629-bb11-b615a9518f52-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.175316 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6mmzl\" (UniqueName: \"kubernetes.io/projected/f107d213-9e7f-4629-bb11-b615a9518f52-kube-api-access-6mmzl\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.175339 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f107d213-9e7f-4629-bb11-b615a9518f52-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.175361 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f107d213-9e7f-4629-bb11-b615a9518f52-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.175377 5119 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f107d213-9e7f-4629-bb11-b615a9518f52-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.175578 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f107d213-9e7f-4629-bb11-b615a9518f52-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.175945 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f107d213-9e7f-4629-bb11-b615a9518f52-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.176146 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f107d213-9e7f-4629-bb11-b615a9518f52-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.176211 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f107d213-9e7f-4629-bb11-b615a9518f52-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: 
I0121 10:06:47.176384 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f107d213-9e7f-4629-bb11-b615a9518f52-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.176507 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f107d213-9e7f-4629-bb11-b615a9518f52-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.176654 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f107d213-9e7f-4629-bb11-b615a9518f52-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.176900 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f107d213-9e7f-4629-bb11-b615a9518f52-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.177195 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f107d213-9e7f-4629-bb11-b615a9518f52-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 
10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.188493 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/f107d213-9e7f-4629-bb11-b615a9518f52-builder-dockercfg-llwsp-push\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.191324 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mmzl\" (UniqueName: \"kubernetes.io/projected/f107d213-9e7f-4629-bb11-b615a9518f52-kube-api-access-6mmzl\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.195673 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/f107d213-9e7f-4629-bb11-b615a9518f52-builder-dockercfg-llwsp-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.265934 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.698650 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 21 10:06:47 crc kubenswrapper[5119]: W0121 10:06:47.716102 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf107d213_9e7f_4629_bb11_b615a9518f52.slice/crio-5f3d47e46df3b803b8fec9feed0683c876f6832b03effa271abad4926cd96c43 WatchSource:0}: Error finding container 5f3d47e46df3b803b8fec9feed0683c876f6832b03effa271abad4926cd96c43: Status 404 returned error can't find the container with id 5f3d47e46df3b803b8fec9feed0683c876f6832b03effa271abad4926cd96c43 Jan 21 10:06:47 crc kubenswrapper[5119]: I0121 10:06:47.837091 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"f107d213-9e7f-4629-bb11-b615a9518f52","Type":"ContainerStarted","Data":"5f3d47e46df3b803b8fec9feed0683c876f6832b03effa271abad4926cd96c43"} Jan 21 10:06:49 crc kubenswrapper[5119]: I0121 10:06:49.919149 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:06:49 crc kubenswrapper[5119]: I0121 10:06:49.919449 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:06:49 crc kubenswrapper[5119]: I0121 10:06:49.919491 5119 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" Jan 21 10:06:49 crc kubenswrapper[5119]: I0121 10:06:49.921024 5119 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0fcd4683c1f89fdf153473f37b8eee4ecff9e78a85c4e6b63e9902ab31d6f4a8"} pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 10:06:49 crc kubenswrapper[5119]: I0121 10:06:49.921184 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" containerID="cri-o://0fcd4683c1f89fdf153473f37b8eee4ecff9e78a85c4e6b63e9902ab31d6f4a8" gracePeriod=600 Jan 21 10:06:50 crc kubenswrapper[5119]: I0121 10:06:50.856472 5119 generic.go:358] "Generic (PLEG): container finished" podID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerID="0fcd4683c1f89fdf153473f37b8eee4ecff9e78a85c4e6b63e9902ab31d6f4a8" exitCode=0 Jan 21 10:06:50 crc kubenswrapper[5119]: I0121 10:06:50.856696 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerDied","Data":"0fcd4683c1f89fdf153473f37b8eee4ecff9e78a85c4e6b63e9902ab31d6f4a8"} Jan 21 10:06:50 crc kubenswrapper[5119]: I0121 10:06:50.856734 5119 scope.go:117] "RemoveContainer" containerID="ccf407c7bf9fef5463fbdd0f20c4692fd497cff47399ce80319fb6eadef27ee1" Jan 21 10:06:52 crc kubenswrapper[5119]: I0121 10:06:52.014525 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-jg5d7"] Jan 21 10:06:52 crc kubenswrapper[5119]: I0121 10:06:52.914561 5119 kubelet.go:2544] "SyncLoop UPDATE" 
source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-jg5d7"]
Jan 21 10:06:52 crc kubenswrapper[5119]: I0121 10:06:52.914650 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-jg5d7"
Jan 21 10:06:52 crc kubenswrapper[5119]: I0121 10:06:52.917049 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\""
Jan 21 10:06:52 crc kubenswrapper[5119]: I0121 10:06:52.919929 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-mzzw8\""
Jan 21 10:06:52 crc kubenswrapper[5119]: I0121 10:06:52.920164 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\""
Jan 21 10:06:53 crc kubenswrapper[5119]: I0121 10:06:53.064986 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t9m5\" (UniqueName: \"kubernetes.io/projected/421d76f3-7577-4fa5-8536-f43a4362ca70-kube-api-access-6t9m5\") pod \"cert-manager-operator-controller-manager-64c74584c4-jg5d7\" (UID: \"421d76f3-7577-4fa5-8536-f43a4362ca70\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-jg5d7"
Jan 21 10:06:53 crc kubenswrapper[5119]: I0121 10:06:53.065033 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/421d76f3-7577-4fa5-8536-f43a4362ca70-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-jg5d7\" (UID: \"421d76f3-7577-4fa5-8536-f43a4362ca70\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-jg5d7"
Jan 21 10:06:53 crc kubenswrapper[5119]: I0121 10:06:53.166361 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6t9m5\" (UniqueName: \"kubernetes.io/projected/421d76f3-7577-4fa5-8536-f43a4362ca70-kube-api-access-6t9m5\") pod \"cert-manager-operator-controller-manager-64c74584c4-jg5d7\" (UID: \"421d76f3-7577-4fa5-8536-f43a4362ca70\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-jg5d7"
Jan 21 10:06:53 crc kubenswrapper[5119]: I0121 10:06:53.166580 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/421d76f3-7577-4fa5-8536-f43a4362ca70-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-jg5d7\" (UID: \"421d76f3-7577-4fa5-8536-f43a4362ca70\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-jg5d7"
Jan 21 10:06:53 crc kubenswrapper[5119]: I0121 10:06:53.167452 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/421d76f3-7577-4fa5-8536-f43a4362ca70-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-jg5d7\" (UID: \"421d76f3-7577-4fa5-8536-f43a4362ca70\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-jg5d7"
Jan 21 10:06:53 crc kubenswrapper[5119]: I0121 10:06:53.186090 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t9m5\" (UniqueName: \"kubernetes.io/projected/421d76f3-7577-4fa5-8536-f43a4362ca70-kube-api-access-6t9m5\") pod \"cert-manager-operator-controller-manager-64c74584c4-jg5d7\" (UID: \"421d76f3-7577-4fa5-8536-f43a4362ca70\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-jg5d7"
Jan 21 10:06:53 crc kubenswrapper[5119]: I0121 10:06:53.234395 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-jg5d7"
Jan 21 10:06:57 crc kubenswrapper[5119]: I0121 10:06:57.392922 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Jan 21 10:06:59 crc kubenswrapper[5119]: I0121 10:06:59.625924 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"]
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.599886 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"]
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.600078 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.603391 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-ca\""
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.603935 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-global-ca\""
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.604474 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-sys-config\""
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.662833 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.662874 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.662894 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.662934 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.663129 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-builder-dockercfg-llwsp-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.663152 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.663170 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-builder-dockercfg-llwsp-push\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.663216 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.663231 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.663246 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.663259 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdv4g\" (UniqueName: \"kubernetes.io/projected/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-kube-api-access-fdv4g\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.663311 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.765738 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.765820 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.766103 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.766191 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-builder-dockercfg-llwsp-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.766224 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.766263 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-builder-dockercfg-llwsp-push\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.766346 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.766376 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.766410 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.766445 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fdv4g\" (UniqueName: \"kubernetes.io/projected/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-kube-api-access-fdv4g\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.766540 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.766642 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.766773 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.767014 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.767075 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.767385 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.767521 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.767660 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.767885 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.768130 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.768390 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.777011 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-builder-dockercfg-llwsp-push\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.779794 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-builder-dockercfg-llwsp-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.787619 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdv4g\" (UniqueName: \"kubernetes.io/projected/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-kube-api-access-fdv4g\") pod \"service-telemetry-operator-2-build\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:00 crc kubenswrapper[5119]: I0121 10:07:00.932972 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:07:07 crc kubenswrapper[5119]: I0121 10:07:07.147559 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-jg5d7"]
Jan 21 10:07:07 crc kubenswrapper[5119]: W0121 10:07:07.229650 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod421d76f3_7577_4fa5_8536_f43a4362ca70.slice/crio-2665326012dd1cf4e0108b127c06312e098153bd0067712afdbfe87d11b9dc56 WatchSource:0}: Error finding container 2665326012dd1cf4e0108b127c06312e098153bd0067712afdbfe87d11b9dc56: Status 404 returned error can't find the container with id 2665326012dd1cf4e0108b127c06312e098153bd0067712afdbfe87d11b9dc56
Jan 21 10:07:07 crc kubenswrapper[5119]: I0121 10:07:07.581713 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"]
Jan 21 10:07:07 crc kubenswrapper[5119]: W0121 10:07:07.585883 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod568d37ef_0166_4215_b0c1_ed9c9db7a3a1.slice/crio-8d780bed7dc37005364ba5d012e7ce6027ff63bc8ef6f8b135e1058ed3390c29 WatchSource:0}: Error finding container 8d780bed7dc37005364ba5d012e7ce6027ff63bc8ef6f8b135e1058ed3390c29: Status 404 returned error can't find the container with id 8d780bed7dc37005364ba5d012e7ce6027ff63bc8ef6f8b135e1058ed3390c29
Jan 21 10:07:07 crc kubenswrapper[5119]: I0121 10:07:07.965637 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"568d37ef-0166-4215-b0c1-ed9c9db7a3a1","Type":"ContainerStarted","Data":"2a11a658d107add311c4b47f4c26be140ecdb6cf3ad9a9ac95670fd30ebf2a29"}
Jan 21 10:07:07 crc kubenswrapper[5119]: I0121 10:07:07.966033 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"568d37ef-0166-4215-b0c1-ed9c9db7a3a1","Type":"ContainerStarted","Data":"8d780bed7dc37005364ba5d012e7ce6027ff63bc8ef6f8b135e1058ed3390c29"}
Jan 21 10:07:07 crc kubenswrapper[5119]: I0121 10:07:07.967198 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"f107d213-9e7f-4629-bb11-b615a9518f52","Type":"ContainerStarted","Data":"7f2c7f10dee147859bb1a22e89fd1a1a77c5a410d1bae413c3ca7a8aa7e17554"}
Jan 21 10:07:07 crc kubenswrapper[5119]: I0121 10:07:07.967286 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="f107d213-9e7f-4629-bb11-b615a9518f52" containerName="manage-dockerfile" containerID="cri-o://7f2c7f10dee147859bb1a22e89fd1a1a77c5a410d1bae413c3ca7a8aa7e17554" gracePeriod=30
Jan 21 10:07:07 crc kubenswrapper[5119]: I0121 10:07:07.968856 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-jg5d7" event={"ID":"421d76f3-7577-4fa5-8536-f43a4362ca70","Type":"ContainerStarted","Data":"2665326012dd1cf4e0108b127c06312e098153bd0067712afdbfe87d11b9dc56"}
Jan 21 10:07:07 crc kubenswrapper[5119]: I0121 10:07:07.970281 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f12216d5-43c7-4e0c-be7a-74aa76900a78","Type":"ContainerStarted","Data":"4552b2d97e0266f522641587ca40dc21900df924868d03fcb22578d003491adf"}
Jan 21 10:07:07 crc kubenswrapper[5119]: I0121 10:07:07.975279 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerStarted","Data":"90484e72f46c3fbcf88c1033b1658a1d68108d21cb7c6bed596a53764123001b"}
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.169145 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.208561 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.425661 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_f107d213-9e7f-4629-bb11-b615a9518f52/manage-dockerfile/0.log"
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.425742 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.587712 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/f107d213-9e7f-4629-bb11-b615a9518f52-builder-dockercfg-llwsp-pull\") pod \"f107d213-9e7f-4629-bb11-b615a9518f52\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") "
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.587746 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f107d213-9e7f-4629-bb11-b615a9518f52-build-proxy-ca-bundles\") pod \"f107d213-9e7f-4629-bb11-b615a9518f52\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") "
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.587785 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f107d213-9e7f-4629-bb11-b615a9518f52-buildworkdir\") pod \"f107d213-9e7f-4629-bb11-b615a9518f52\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") "
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.587856 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f107d213-9e7f-4629-bb11-b615a9518f52-build-system-configs\") pod \"f107d213-9e7f-4629-bb11-b615a9518f52\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") "
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.587939 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/f107d213-9e7f-4629-bb11-b615a9518f52-builder-dockercfg-llwsp-push\") pod \"f107d213-9e7f-4629-bb11-b615a9518f52\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") "
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.587956 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f107d213-9e7f-4629-bb11-b615a9518f52-build-blob-cache\") pod \"f107d213-9e7f-4629-bb11-b615a9518f52\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") "
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.587976 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mmzl\" (UniqueName: \"kubernetes.io/projected/f107d213-9e7f-4629-bb11-b615a9518f52-kube-api-access-6mmzl\") pod \"f107d213-9e7f-4629-bb11-b615a9518f52\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") "
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.587989 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f107d213-9e7f-4629-bb11-b615a9518f52-buildcachedir\") pod \"f107d213-9e7f-4629-bb11-b615a9518f52\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") "
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.588037 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f107d213-9e7f-4629-bb11-b615a9518f52-node-pullsecrets\") pod \"f107d213-9e7f-4629-bb11-b615a9518f52\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") "
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.588058 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f107d213-9e7f-4629-bb11-b615a9518f52-container-storage-root\") pod \"f107d213-9e7f-4629-bb11-b615a9518f52\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") "
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.588118 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f107d213-9e7f-4629-bb11-b615a9518f52-build-ca-bundles\") pod \"f107d213-9e7f-4629-bb11-b615a9518f52\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") "
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.588138 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f107d213-9e7f-4629-bb11-b615a9518f52-container-storage-run\") pod \"f107d213-9e7f-4629-bb11-b615a9518f52\" (UID: \"f107d213-9e7f-4629-bb11-b615a9518f52\") "
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.588644 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f107d213-9e7f-4629-bb11-b615a9518f52-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "f107d213-9e7f-4629-bb11-b615a9518f52" (UID: "f107d213-9e7f-4629-bb11-b615a9518f52"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.588906 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f107d213-9e7f-4629-bb11-b615a9518f52-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "f107d213-9e7f-4629-bb11-b615a9518f52" (UID: "f107d213-9e7f-4629-bb11-b615a9518f52"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.589045 5119 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f107d213-9e7f-4629-bb11-b615a9518f52-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.589075 5119 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f107d213-9e7f-4629-bb11-b615a9518f52-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.589109 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f107d213-9e7f-4629-bb11-b615a9518f52-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "f107d213-9e7f-4629-bb11-b615a9518f52" (UID: "f107d213-9e7f-4629-bb11-b615a9518f52"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.589509 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f107d213-9e7f-4629-bb11-b615a9518f52-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "f107d213-9e7f-4629-bb11-b615a9518f52" (UID: "f107d213-9e7f-4629-bb11-b615a9518f52"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.589960 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f107d213-9e7f-4629-bb11-b615a9518f52-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "f107d213-9e7f-4629-bb11-b615a9518f52" (UID: "f107d213-9e7f-4629-bb11-b615a9518f52"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.590167 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f107d213-9e7f-4629-bb11-b615a9518f52-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "f107d213-9e7f-4629-bb11-b615a9518f52" (UID: "f107d213-9e7f-4629-bb11-b615a9518f52"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.590216 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f107d213-9e7f-4629-bb11-b615a9518f52-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "f107d213-9e7f-4629-bb11-b615a9518f52" (UID: "f107d213-9e7f-4629-bb11-b615a9518f52"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.590423 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f107d213-9e7f-4629-bb11-b615a9518f52-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "f107d213-9e7f-4629-bb11-b615a9518f52" (UID: "f107d213-9e7f-4629-bb11-b615a9518f52"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.592086 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f107d213-9e7f-4629-bb11-b615a9518f52-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "f107d213-9e7f-4629-bb11-b615a9518f52" (UID: "f107d213-9e7f-4629-bb11-b615a9518f52"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.595002 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f107d213-9e7f-4629-bb11-b615a9518f52-builder-dockercfg-llwsp-pull" (OuterVolumeSpecName: "builder-dockercfg-llwsp-pull") pod "f107d213-9e7f-4629-bb11-b615a9518f52" (UID: "f107d213-9e7f-4629-bb11-b615a9518f52"). InnerVolumeSpecName "builder-dockercfg-llwsp-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.595492 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f107d213-9e7f-4629-bb11-b615a9518f52-builder-dockercfg-llwsp-push" (OuterVolumeSpecName: "builder-dockercfg-llwsp-push") pod "f107d213-9e7f-4629-bb11-b615a9518f52" (UID: "f107d213-9e7f-4629-bb11-b615a9518f52"). InnerVolumeSpecName "builder-dockercfg-llwsp-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.600271 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f107d213-9e7f-4629-bb11-b615a9518f52-kube-api-access-6mmzl" (OuterVolumeSpecName: "kube-api-access-6mmzl") pod "f107d213-9e7f-4629-bb11-b615a9518f52" (UID: "f107d213-9e7f-4629-bb11-b615a9518f52"). InnerVolumeSpecName "kube-api-access-6mmzl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.693310 5119 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/f107d213-9e7f-4629-bb11-b615a9518f52-builder-dockercfg-llwsp-push\") on node \"crc\" DevicePath \"\""
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.693343 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6mmzl\" (UniqueName: \"kubernetes.io/projected/f107d213-9e7f-4629-bb11-b615a9518f52-kube-api-access-6mmzl\") on node \"crc\" DevicePath \"\""
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.693353 5119 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f107d213-9e7f-4629-bb11-b615a9518f52-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.693361 5119 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f107d213-9e7f-4629-bb11-b615a9518f52-container-storage-root\") on node \"crc\" DevicePath \"\""
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.693370 5119 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f107d213-9e7f-4629-bb11-b615a9518f52-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.693380 5119 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f107d213-9e7f-4629-bb11-b615a9518f52-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.693389 5119 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/f107d213-9e7f-4629-bb11-b615a9518f52-builder-dockercfg-llwsp-pull\") on node \"crc\" DevicePath \"\""
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.693397 5119 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f107d213-9e7f-4629-bb11-b615a9518f52-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.693406 5119 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f107d213-9e7f-4629-bb11-b615a9518f52-buildworkdir\") on node \"crc\" DevicePath \"\""
Jan 21 10:07:08 crc kubenswrapper[5119]: I0121 10:07:08.693414 5119 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f107d213-9e7f-4629-bb11-b615a9518f52-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 21 10:07:09 crc kubenswrapper[5119]: I0121 10:07:09.007241 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_f107d213-9e7f-4629-bb11-b615a9518f52/manage-dockerfile/0.log"
Jan 21 10:07:09 crc kubenswrapper[5119]: I0121 10:07:09.007291 5119 generic.go:358] "Generic (PLEG): container finished" podID="f107d213-9e7f-4629-bb11-b615a9518f52" containerID="7f2c7f10dee147859bb1a22e89fd1a1a77c5a410d1bae413c3ca7a8aa7e17554" exitCode=1
Jan 21 10:07:09 crc kubenswrapper[5119]: I0121 10:07:09.007902 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"f107d213-9e7f-4629-bb11-b615a9518f52","Type":"ContainerDied","Data":"7f2c7f10dee147859bb1a22e89fd1a1a77c5a410d1bae413c3ca7a8aa7e17554"}
Jan 21 10:07:09 crc kubenswrapper[5119]: I0121 10:07:09.008249 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"f107d213-9e7f-4629-bb11-b615a9518f52","Type":"ContainerDied","Data":"5f3d47e46df3b803b8fec9feed0683c876f6832b03effa271abad4926cd96c43"}
Jan 21 10:07:09 crc kubenswrapper[5119]: I0121 10:07:09.008300 5119 scope.go:117] "RemoveContainer" containerID="7f2c7f10dee147859bb1a22e89fd1a1a77c5a410d1bae413c3ca7a8aa7e17554"
Jan 21 10:07:09 crc kubenswrapper[5119]: I0121 10:07:09.008557 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 10:07:09 crc kubenswrapper[5119]: I0121 10:07:09.047392 5119 scope.go:117] "RemoveContainer" containerID="7f2c7f10dee147859bb1a22e89fd1a1a77c5a410d1bae413c3ca7a8aa7e17554"
Jan 21 10:07:09 crc kubenswrapper[5119]: E0121 10:07:09.048415 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f2c7f10dee147859bb1a22e89fd1a1a77c5a410d1bae413c3ca7a8aa7e17554\": container with ID starting with 7f2c7f10dee147859bb1a22e89fd1a1a77c5a410d1bae413c3ca7a8aa7e17554 not found: ID does not exist" containerID="7f2c7f10dee147859bb1a22e89fd1a1a77c5a410d1bae413c3ca7a8aa7e17554"
Jan 21 10:07:09 crc kubenswrapper[5119]: I0121 10:07:09.048490 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f2c7f10dee147859bb1a22e89fd1a1a77c5a410d1bae413c3ca7a8aa7e17554"} err="failed to get container status \"7f2c7f10dee147859bb1a22e89fd1a1a77c5a410d1bae413c3ca7a8aa7e17554\": rpc error: code = NotFound desc = could not find container \"7f2c7f10dee147859bb1a22e89fd1a1a77c5a410d1bae413c3ca7a8aa7e17554\": container with ID starting with 7f2c7f10dee147859bb1a22e89fd1a1a77c5a410d1bae413c3ca7a8aa7e17554 not found: ID does not exist"
Jan 21 10:07:09 crc kubenswrapper[5119]: I0121 10:07:09.055069 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Jan 21 10:07:09 crc kubenswrapper[5119]: I0121
10:07:09.059334 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 21 10:07:10 crc kubenswrapper[5119]: I0121 10:07:10.019445 5119 generic.go:358] "Generic (PLEG): container finished" podID="f12216d5-43c7-4e0c-be7a-74aa76900a78" containerID="4552b2d97e0266f522641587ca40dc21900df924868d03fcb22578d003491adf" exitCode=0 Jan 21 10:07:10 crc kubenswrapper[5119]: I0121 10:07:10.019595 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f12216d5-43c7-4e0c-be7a-74aa76900a78","Type":"ContainerDied","Data":"4552b2d97e0266f522641587ca40dc21900df924868d03fcb22578d003491adf"} Jan 21 10:07:10 crc kubenswrapper[5119]: I0121 10:07:10.609281 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f107d213-9e7f-4629-bb11-b615a9518f52" path="/var/lib/kubelet/pods/f107d213-9e7f-4629-bb11-b615a9518f52/volumes" Jan 21 10:07:11 crc kubenswrapper[5119]: I0121 10:07:11.034693 5119 generic.go:358] "Generic (PLEG): container finished" podID="f12216d5-43c7-4e0c-be7a-74aa76900a78" containerID="3203c203a288db7861c4b5fe684389a77f789c268b3eac80b38cefb3a1dec1f8" exitCode=0 Jan 21 10:07:11 crc kubenswrapper[5119]: I0121 10:07:11.034855 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f12216d5-43c7-4e0c-be7a-74aa76900a78","Type":"ContainerDied","Data":"3203c203a288db7861c4b5fe684389a77f789c268b3eac80b38cefb3a1dec1f8"} Jan 21 10:07:15 crc kubenswrapper[5119]: I0121 10:07:15.094208 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-jg5d7" event={"ID":"421d76f3-7577-4fa5-8536-f43a4362ca70","Type":"ContainerStarted","Data":"dba12f56da5a07f7ba150ba15bf6b528335a42f22b53bccdfa74a0141bd920e1"} Jan 21 10:07:15 crc kubenswrapper[5119]: I0121 10:07:15.096801 5119 kubelet.go:2569] "SyncLoop (PLEG): event for 
pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f12216d5-43c7-4e0c-be7a-74aa76900a78","Type":"ContainerStarted","Data":"4abf014a3ddbaaf1a1cdfe3fe320eb4a50ff257aa47cc1018e65410ff4f1fe03"} Jan 21 10:07:15 crc kubenswrapper[5119]: I0121 10:07:15.097879 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:07:15 crc kubenswrapper[5119]: I0121 10:07:15.119847 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-jg5d7" podStartSLOduration=16.721959702 podStartE2EDuration="24.11982818s" podCreationTimestamp="2026-01-21 10:06:51 +0000 UTC" firstStartedPulling="2026-01-21 10:07:07.232404144 +0000 UTC m=+742.900495812" lastFinishedPulling="2026-01-21 10:07:14.630272612 +0000 UTC m=+750.298364290" observedRunningTime="2026-01-21 10:07:15.116807654 +0000 UTC m=+750.784899342" watchObservedRunningTime="2026-01-21 10:07:15.11982818 +0000 UTC m=+750.787919868" Jan 21 10:07:15 crc kubenswrapper[5119]: I0121 10:07:15.156882 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=10.835079095 podStartE2EDuration="36.156860796s" podCreationTimestamp="2026-01-21 10:06:39 +0000 UTC" firstStartedPulling="2026-01-21 10:06:41.991405655 +0000 UTC m=+717.659497323" lastFinishedPulling="2026-01-21 10:07:07.313187346 +0000 UTC m=+742.981279024" observedRunningTime="2026-01-21 10:07:15.143293633 +0000 UTC m=+750.811385331" watchObservedRunningTime="2026-01-21 10:07:15.156860796 +0000 UTC m=+750.824952484" Jan 21 10:07:15 crc kubenswrapper[5119]: I0121 10:07:15.748138 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b2sg2"] Jan 21 10:07:15 crc kubenswrapper[5119]: I0121 10:07:15.748704 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing 
container" podUID="f107d213-9e7f-4629-bb11-b615a9518f52" containerName="manage-dockerfile" Jan 21 10:07:15 crc kubenswrapper[5119]: I0121 10:07:15.748720 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="f107d213-9e7f-4629-bb11-b615a9518f52" containerName="manage-dockerfile" Jan 21 10:07:15 crc kubenswrapper[5119]: I0121 10:07:15.748826 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="f107d213-9e7f-4629-bb11-b615a9518f52" containerName="manage-dockerfile" Jan 21 10:07:15 crc kubenswrapper[5119]: I0121 10:07:15.752399 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b2sg2" Jan 21 10:07:15 crc kubenswrapper[5119]: I0121 10:07:15.800528 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa-utilities\") pod \"certified-operators-b2sg2\" (UID: \"c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa\") " pod="openshift-marketplace/certified-operators-b2sg2" Jan 21 10:07:15 crc kubenswrapper[5119]: I0121 10:07:15.800591 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa-catalog-content\") pod \"certified-operators-b2sg2\" (UID: \"c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa\") " pod="openshift-marketplace/certified-operators-b2sg2" Jan 21 10:07:15 crc kubenswrapper[5119]: I0121 10:07:15.800660 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gm6q\" (UniqueName: \"kubernetes.io/projected/c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa-kube-api-access-4gm6q\") pod \"certified-operators-b2sg2\" (UID: \"c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa\") " pod="openshift-marketplace/certified-operators-b2sg2" Jan 21 10:07:15 crc kubenswrapper[5119]: I0121 10:07:15.874421 5119 
kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b2sg2"] Jan 21 10:07:15 crc kubenswrapper[5119]: I0121 10:07:15.901625 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4gm6q\" (UniqueName: \"kubernetes.io/projected/c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa-kube-api-access-4gm6q\") pod \"certified-operators-b2sg2\" (UID: \"c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa\") " pod="openshift-marketplace/certified-operators-b2sg2" Jan 21 10:07:15 crc kubenswrapper[5119]: I0121 10:07:15.901953 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa-utilities\") pod \"certified-operators-b2sg2\" (UID: \"c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa\") " pod="openshift-marketplace/certified-operators-b2sg2" Jan 21 10:07:15 crc kubenswrapper[5119]: I0121 10:07:15.902056 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa-catalog-content\") pod \"certified-operators-b2sg2\" (UID: \"c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa\") " pod="openshift-marketplace/certified-operators-b2sg2" Jan 21 10:07:15 crc kubenswrapper[5119]: I0121 10:07:15.902496 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa-utilities\") pod \"certified-operators-b2sg2\" (UID: \"c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa\") " pod="openshift-marketplace/certified-operators-b2sg2" Jan 21 10:07:15 crc kubenswrapper[5119]: I0121 10:07:15.902693 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa-catalog-content\") pod \"certified-operators-b2sg2\" (UID: 
\"c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa\") " pod="openshift-marketplace/certified-operators-b2sg2" Jan 21 10:07:15 crc kubenswrapper[5119]: I0121 10:07:15.924352 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gm6q\" (UniqueName: \"kubernetes.io/projected/c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa-kube-api-access-4gm6q\") pod \"certified-operators-b2sg2\" (UID: \"c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa\") " pod="openshift-marketplace/certified-operators-b2sg2" Jan 21 10:07:16 crc kubenswrapper[5119]: I0121 10:07:16.066353 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b2sg2" Jan 21 10:07:16 crc kubenswrapper[5119]: I0121 10:07:16.604480 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b2sg2"] Jan 21 10:07:17 crc kubenswrapper[5119]: I0121 10:07:17.114713 5119 generic.go:358] "Generic (PLEG): container finished" podID="c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa" containerID="e7390156037cb15bf74270db62ae3bab0e2646206cc03874e47e2707e72b4ecb" exitCode=0 Jan 21 10:07:17 crc kubenswrapper[5119]: I0121 10:07:17.114931 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b2sg2" event={"ID":"c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa","Type":"ContainerDied","Data":"e7390156037cb15bf74270db62ae3bab0e2646206cc03874e47e2707e72b4ecb"} Jan 21 10:07:17 crc kubenswrapper[5119]: I0121 10:07:17.115233 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b2sg2" event={"ID":"c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa","Type":"ContainerStarted","Data":"e140fa0439da8bf53bef4827827171fbd446c0e51e93bf51e70f3eded816ef05"} Jan 21 10:07:18 crc kubenswrapper[5119]: I0121 10:07:18.132107 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b2sg2" 
event={"ID":"c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa","Type":"ContainerStarted","Data":"c1256d70eec6a3453470ee2f0df9f40bbdcebc7daa32b9ce8cfba702240a99c5"} Jan 21 10:07:19 crc kubenswrapper[5119]: I0121 10:07:19.140323 5119 generic.go:358] "Generic (PLEG): container finished" podID="568d37ef-0166-4215-b0c1-ed9c9db7a3a1" containerID="2a11a658d107add311c4b47f4c26be140ecdb6cf3ad9a9ac95670fd30ebf2a29" exitCode=0 Jan 21 10:07:19 crc kubenswrapper[5119]: I0121 10:07:19.140409 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"568d37ef-0166-4215-b0c1-ed9c9db7a3a1","Type":"ContainerDied","Data":"2a11a658d107add311c4b47f4c26be140ecdb6cf3ad9a9ac95670fd30ebf2a29"} Jan 21 10:07:19 crc kubenswrapper[5119]: I0121 10:07:19.143053 5119 generic.go:358] "Generic (PLEG): container finished" podID="c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa" containerID="c1256d70eec6a3453470ee2f0df9f40bbdcebc7daa32b9ce8cfba702240a99c5" exitCode=0 Jan 21 10:07:19 crc kubenswrapper[5119]: I0121 10:07:19.143106 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b2sg2" event={"ID":"c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa","Type":"ContainerDied","Data":"c1256d70eec6a3453470ee2f0df9f40bbdcebc7daa32b9ce8cfba702240a99c5"} Jan 21 10:07:19 crc kubenswrapper[5119]: I0121 10:07:19.787468 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-ps4d8"] Jan 21 10:07:19 crc kubenswrapper[5119]: I0121 10:07:19.972047 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-ps4d8"] Jan 21 10:07:19 crc kubenswrapper[5119]: I0121 10:07:19.972175 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-ps4d8" Jan 21 10:07:19 crc kubenswrapper[5119]: I0121 10:07:19.974094 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-mrd7b\"" Jan 21 10:07:19 crc kubenswrapper[5119]: I0121 10:07:19.974349 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Jan 21 10:07:19 crc kubenswrapper[5119]: I0121 10:07:19.974830 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Jan 21 10:07:20 crc kubenswrapper[5119]: I0121 10:07:20.054055 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9nbw\" (UniqueName: \"kubernetes.io/projected/9ab82c58-d623-4b22-aae4-4f8c744cb42d-kube-api-access-g9nbw\") pod \"cert-manager-webhook-7894b5b9b4-ps4d8\" (UID: \"9ab82c58-d623-4b22-aae4-4f8c744cb42d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-ps4d8" Jan 21 10:07:20 crc kubenswrapper[5119]: I0121 10:07:20.054142 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9ab82c58-d623-4b22-aae4-4f8c744cb42d-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-ps4d8\" (UID: \"9ab82c58-d623-4b22-aae4-4f8c744cb42d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-ps4d8" Jan 21 10:07:20 crc kubenswrapper[5119]: I0121 10:07:20.151180 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b2sg2" event={"ID":"c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa","Type":"ContainerStarted","Data":"983a394cc780e9ce6331a923c51abe88c838573d38482b5fd45a5f3055513ea6"} Jan 21 10:07:20 crc kubenswrapper[5119]: I0121 10:07:20.152999 5119 generic.go:358] "Generic (PLEG): container finished" 
podID="568d37ef-0166-4215-b0c1-ed9c9db7a3a1" containerID="2305ba98319ba5297bd5013ad379fc92b7cc43f033f58357c00a187d95ff26b8" exitCode=0 Jan 21 10:07:20 crc kubenswrapper[5119]: I0121 10:07:20.153016 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"568d37ef-0166-4215-b0c1-ed9c9db7a3a1","Type":"ContainerDied","Data":"2305ba98319ba5297bd5013ad379fc92b7cc43f033f58357c00a187d95ff26b8"} Jan 21 10:07:20 crc kubenswrapper[5119]: I0121 10:07:20.158339 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g9nbw\" (UniqueName: \"kubernetes.io/projected/9ab82c58-d623-4b22-aae4-4f8c744cb42d-kube-api-access-g9nbw\") pod \"cert-manager-webhook-7894b5b9b4-ps4d8\" (UID: \"9ab82c58-d623-4b22-aae4-4f8c744cb42d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-ps4d8" Jan 21 10:07:20 crc kubenswrapper[5119]: I0121 10:07:20.158395 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9ab82c58-d623-4b22-aae4-4f8c744cb42d-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-ps4d8\" (UID: \"9ab82c58-d623-4b22-aae4-4f8c744cb42d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-ps4d8" Jan 21 10:07:20 crc kubenswrapper[5119]: I0121 10:07:20.180210 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b2sg2" podStartSLOduration=4.324624397 podStartE2EDuration="5.180193133s" podCreationTimestamp="2026-01-21 10:07:15 +0000 UTC" firstStartedPulling="2026-01-21 10:07:17.115665283 +0000 UTC m=+752.783756961" lastFinishedPulling="2026-01-21 10:07:17.971233999 +0000 UTC m=+753.639325697" observedRunningTime="2026-01-21 10:07:20.179781702 +0000 UTC m=+755.847873380" watchObservedRunningTime="2026-01-21 10:07:20.180193133 +0000 UTC m=+755.848284811" Jan 21 10:07:20 crc kubenswrapper[5119]: I0121 10:07:20.185806 5119 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9ab82c58-d623-4b22-aae4-4f8c744cb42d-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-ps4d8\" (UID: \"9ab82c58-d623-4b22-aae4-4f8c744cb42d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-ps4d8" Jan 21 10:07:20 crc kubenswrapper[5119]: I0121 10:07:20.186338 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9nbw\" (UniqueName: \"kubernetes.io/projected/9ab82c58-d623-4b22-aae4-4f8c744cb42d-kube-api-access-g9nbw\") pod \"cert-manager-webhook-7894b5b9b4-ps4d8\" (UID: \"9ab82c58-d623-4b22-aae4-4f8c744cb42d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-ps4d8" Jan 21 10:07:20 crc kubenswrapper[5119]: I0121 10:07:20.233674 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_568d37ef-0166-4215-b0c1-ed9c9db7a3a1/manage-dockerfile/0.log" Jan 21 10:07:20 crc kubenswrapper[5119]: I0121 10:07:20.286436 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-ps4d8" Jan 21 10:07:20 crc kubenswrapper[5119]: I0121 10:07:20.710336 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-ps4d8"] Jan 21 10:07:20 crc kubenswrapper[5119]: W0121 10:07:20.719901 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ab82c58_d623_4b22_aae4_4f8c744cb42d.slice/crio-222f4231b5adbe2fe67e539487af38af5a7a9ed7f22bc765c05f3c45e9a10f36 WatchSource:0}: Error finding container 222f4231b5adbe2fe67e539487af38af5a7a9ed7f22bc765c05f3c45e9a10f36: Status 404 returned error can't find the container with id 222f4231b5adbe2fe67e539487af38af5a7a9ed7f22bc765c05f3c45e9a10f36 Jan 21 10:07:20 crc kubenswrapper[5119]: I0121 10:07:20.798005 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-vv5mq"] Jan 21 10:07:21 crc kubenswrapper[5119]: I0121 10:07:21.456188 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"568d37ef-0166-4215-b0c1-ed9c9db7a3a1","Type":"ContainerStarted","Data":"5f77e5835cb06f4b41bae3f3cc6f9a85d3dd6b2ee8a301462b66353c7ba1466b"} Jan 21 10:07:21 crc kubenswrapper[5119]: I0121 10:07:21.456510 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-ps4d8" event={"ID":"9ab82c58-d623-4b22-aae4-4f8c744cb42d","Type":"ContainerStarted","Data":"222f4231b5adbe2fe67e539487af38af5a7a9ed7f22bc765c05f3c45e9a10f36"} Jan 21 10:07:21 crc kubenswrapper[5119]: I0121 10:07:21.456530 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-vv5mq"] Jan 21 10:07:21 crc kubenswrapper[5119]: I0121 10:07:21.457585 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-vv5mq" Jan 21 10:07:21 crc kubenswrapper[5119]: I0121 10:07:21.459291 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-vzw44\"" Jan 21 10:07:21 crc kubenswrapper[5119]: I0121 10:07:21.496536 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=22.496512523 podStartE2EDuration="22.496512523s" podCreationTimestamp="2026-01-21 10:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:07:21.493856038 +0000 UTC m=+757.161947736" watchObservedRunningTime="2026-01-21 10:07:21.496512523 +0000 UTC m=+757.164604211" Jan 21 10:07:21 crc kubenswrapper[5119]: I0121 10:07:21.575744 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgjmx\" (UniqueName: \"kubernetes.io/projected/d9416295-81e6-488c-b079-97d7ba7c4f3e-kube-api-access-bgjmx\") pod \"cert-manager-cainjector-7dbf76d5c8-vv5mq\" (UID: \"d9416295-81e6-488c-b079-97d7ba7c4f3e\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-vv5mq" Jan 21 10:07:21 crc kubenswrapper[5119]: I0121 10:07:21.575871 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9416295-81e6-488c-b079-97d7ba7c4f3e-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-vv5mq\" (UID: \"d9416295-81e6-488c-b079-97d7ba7c4f3e\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-vv5mq" Jan 21 10:07:21 crc kubenswrapper[5119]: I0121 10:07:21.677145 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bgjmx\" (UniqueName: 
\"kubernetes.io/projected/d9416295-81e6-488c-b079-97d7ba7c4f3e-kube-api-access-bgjmx\") pod \"cert-manager-cainjector-7dbf76d5c8-vv5mq\" (UID: \"d9416295-81e6-488c-b079-97d7ba7c4f3e\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-vv5mq" Jan 21 10:07:21 crc kubenswrapper[5119]: I0121 10:07:21.677205 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9416295-81e6-488c-b079-97d7ba7c4f3e-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-vv5mq\" (UID: \"d9416295-81e6-488c-b079-97d7ba7c4f3e\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-vv5mq" Jan 21 10:07:21 crc kubenswrapper[5119]: I0121 10:07:21.697251 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgjmx\" (UniqueName: \"kubernetes.io/projected/d9416295-81e6-488c-b079-97d7ba7c4f3e-kube-api-access-bgjmx\") pod \"cert-manager-cainjector-7dbf76d5c8-vv5mq\" (UID: \"d9416295-81e6-488c-b079-97d7ba7c4f3e\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-vv5mq" Jan 21 10:07:21 crc kubenswrapper[5119]: I0121 10:07:21.723182 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9416295-81e6-488c-b079-97d7ba7c4f3e-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-vv5mq\" (UID: \"d9416295-81e6-488c-b079-97d7ba7c4f3e\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-vv5mq" Jan 21 10:07:21 crc kubenswrapper[5119]: I0121 10:07:21.775160 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-vv5mq" Jan 21 10:07:21 crc kubenswrapper[5119]: I0121 10:07:21.863457 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9t8lg"] Jan 21 10:07:22 crc kubenswrapper[5119]: I0121 10:07:22.722350 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9t8lg" Jan 21 10:07:22 crc kubenswrapper[5119]: I0121 10:07:22.731898 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9t8lg"] Jan 21 10:07:22 crc kubenswrapper[5119]: I0121 10:07:22.791727 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b9223d3-c96e-4967-9bb5-a877a5635e02-utilities\") pod \"community-operators-9t8lg\" (UID: \"2b9223d3-c96e-4967-9bb5-a877a5635e02\") " pod="openshift-marketplace/community-operators-9t8lg" Jan 21 10:07:22 crc kubenswrapper[5119]: I0121 10:07:22.791800 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b9223d3-c96e-4967-9bb5-a877a5635e02-catalog-content\") pod \"community-operators-9t8lg\" (UID: \"2b9223d3-c96e-4967-9bb5-a877a5635e02\") " pod="openshift-marketplace/community-operators-9t8lg" Jan 21 10:07:22 crc kubenswrapper[5119]: I0121 10:07:22.791832 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4nhq\" (UniqueName: \"kubernetes.io/projected/2b9223d3-c96e-4967-9bb5-a877a5635e02-kube-api-access-t4nhq\") pod \"community-operators-9t8lg\" (UID: \"2b9223d3-c96e-4967-9bb5-a877a5635e02\") " pod="openshift-marketplace/community-operators-9t8lg" Jan 21 10:07:22 crc kubenswrapper[5119]: I0121 10:07:22.893573 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b9223d3-c96e-4967-9bb5-a877a5635e02-utilities\") pod \"community-operators-9t8lg\" (UID: \"2b9223d3-c96e-4967-9bb5-a877a5635e02\") " pod="openshift-marketplace/community-operators-9t8lg" Jan 21 10:07:22 crc kubenswrapper[5119]: I0121 10:07:22.893655 5119 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b9223d3-c96e-4967-9bb5-a877a5635e02-catalog-content\") pod \"community-operators-9t8lg\" (UID: \"2b9223d3-c96e-4967-9bb5-a877a5635e02\") " pod="openshift-marketplace/community-operators-9t8lg"
Jan 21 10:07:22 crc kubenswrapper[5119]: I0121 10:07:22.893680 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t4nhq\" (UniqueName: \"kubernetes.io/projected/2b9223d3-c96e-4967-9bb5-a877a5635e02-kube-api-access-t4nhq\") pod \"community-operators-9t8lg\" (UID: \"2b9223d3-c96e-4967-9bb5-a877a5635e02\") " pod="openshift-marketplace/community-operators-9t8lg"
Jan 21 10:07:22 crc kubenswrapper[5119]: I0121 10:07:22.894206 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b9223d3-c96e-4967-9bb5-a877a5635e02-catalog-content\") pod \"community-operators-9t8lg\" (UID: \"2b9223d3-c96e-4967-9bb5-a877a5635e02\") " pod="openshift-marketplace/community-operators-9t8lg"
Jan 21 10:07:22 crc kubenswrapper[5119]: I0121 10:07:22.894451 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b9223d3-c96e-4967-9bb5-a877a5635e02-utilities\") pod \"community-operators-9t8lg\" (UID: \"2b9223d3-c96e-4967-9bb5-a877a5635e02\") " pod="openshift-marketplace/community-operators-9t8lg"
Jan 21 10:07:22 crc kubenswrapper[5119]: I0121 10:07:22.923203 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4nhq\" (UniqueName: \"kubernetes.io/projected/2b9223d3-c96e-4967-9bb5-a877a5635e02-kube-api-access-t4nhq\") pod \"community-operators-9t8lg\" (UID: \"2b9223d3-c96e-4967-9bb5-a877a5635e02\") " pod="openshift-marketplace/community-operators-9t8lg"
Jan 21 10:07:23 crc kubenswrapper[5119]: I0121 10:07:23.037196 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9t8lg"
Jan 21 10:07:24 crc kubenswrapper[5119]: I0121 10:07:24.500125 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-vv5mq"]
Jan 21 10:07:24 crc kubenswrapper[5119]: I0121 10:07:24.557659 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9t8lg"]
Jan 21 10:07:24 crc kubenswrapper[5119]: W0121 10:07:24.561117 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b9223d3_c96e_4967_9bb5_a877a5635e02.slice/crio-6447725aa808262657fc48c3e95475b92a7f5b7e1e1803092845ce45d1aa8198 WatchSource:0}: Error finding container 6447725aa808262657fc48c3e95475b92a7f5b7e1e1803092845ce45d1aa8198: Status 404 returned error can't find the container with id 6447725aa808262657fc48c3e95475b92a7f5b7e1e1803092845ce45d1aa8198
Jan 21 10:07:25 crc kubenswrapper[5119]: I0121 10:07:25.201406 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-vv5mq" event={"ID":"d9416295-81e6-488c-b079-97d7ba7c4f3e","Type":"ContainerStarted","Data":"ed9be2be754295c8167e3a7bbe8da80a2cc23b2b6cf4ea8c24d3fb204ae02692"}
Jan 21 10:07:25 crc kubenswrapper[5119]: I0121 10:07:25.202729 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t8lg" event={"ID":"2b9223d3-c96e-4967-9bb5-a877a5635e02","Type":"ContainerStarted","Data":"6447725aa808262657fc48c3e95475b92a7f5b7e1e1803092845ce45d1aa8198"}
Jan 21 10:07:26 crc kubenswrapper[5119]: I0121 10:07:26.067020 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-b2sg2"
Jan 21 10:07:26 crc kubenswrapper[5119]: I0121 10:07:26.067274 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b2sg2"
Jan 21 10:07:26 crc kubenswrapper[5119]: I0121 10:07:26.122469 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b2sg2"
Jan 21 10:07:26 crc kubenswrapper[5119]: I0121 10:07:26.216153 5119 generic.go:358] "Generic (PLEG): container finished" podID="2b9223d3-c96e-4967-9bb5-a877a5635e02" containerID="afab34348795c31bc885ce7c875c9c2b54eb4832e99094268e80ebaebe21ce74" exitCode=0
Jan 21 10:07:26 crc kubenswrapper[5119]: I0121 10:07:26.216268 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t8lg" event={"ID":"2b9223d3-c96e-4967-9bb5-a877a5635e02","Type":"ContainerDied","Data":"afab34348795c31bc885ce7c875c9c2b54eb4832e99094268e80ebaebe21ce74"}
Jan 21 10:07:26 crc kubenswrapper[5119]: I0121 10:07:26.282857 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b2sg2"
Jan 21 10:07:27 crc kubenswrapper[5119]: I0121 10:07:27.212124 5119 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="f12216d5-43c7-4e0c-be7a-74aa76900a78" containerName="elasticsearch" probeResult="failure" output=<
Jan 21 10:07:27 crc kubenswrapper[5119]: {"timestamp": "2026-01-21T10:07:27+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Jan 21 10:07:27 crc kubenswrapper[5119]: >
Jan 21 10:07:27 crc kubenswrapper[5119]: I0121 10:07:27.227320 5119 generic.go:358] "Generic (PLEG): container finished" podID="2b9223d3-c96e-4967-9bb5-a877a5635e02" containerID="aeae7b92d050f0394d7e15c35347e09ed2f7b212a6284574317532acec3e8d65" exitCode=0
Jan 21 10:07:27 crc kubenswrapper[5119]: I0121 10:07:27.227437 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t8lg" event={"ID":"2b9223d3-c96e-4967-9bb5-a877a5635e02","Type":"ContainerDied","Data":"aeae7b92d050f0394d7e15c35347e09ed2f7b212a6284574317532acec3e8d65"}
Jan 21 10:07:28 crc kubenswrapper[5119]: I0121 10:07:28.237724 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t8lg" event={"ID":"2b9223d3-c96e-4967-9bb5-a877a5635e02","Type":"ContainerStarted","Data":"6b0db893a6daf0d05a8b0109da764ad19193de5a547ca9c32f48f24922c85a67"}
Jan 21 10:07:28 crc kubenswrapper[5119]: I0121 10:07:28.265837 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b2sg2"]
Jan 21 10:07:28 crc kubenswrapper[5119]: I0121 10:07:28.266094 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b2sg2" podUID="c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa" containerName="registry-server" containerID="cri-o://983a394cc780e9ce6331a923c51abe88c838573d38482b5fd45a5f3055513ea6" gracePeriod=2
Jan 21 10:07:28 crc kubenswrapper[5119]: I0121 10:07:28.267397 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9t8lg" podStartSLOduration=6.719422054 podStartE2EDuration="7.267386841s" podCreationTimestamp="2026-01-21 10:07:21 +0000 UTC" firstStartedPulling="2026-01-21 10:07:26.216988176 +0000 UTC m=+761.885079854" lastFinishedPulling="2026-01-21 10:07:26.764952963 +0000 UTC m=+762.433044641" observedRunningTime="2026-01-21 10:07:28.256231066 +0000 UTC m=+763.924322734" watchObservedRunningTime="2026-01-21 10:07:28.267386841 +0000 UTC m=+763.935478519"
Jan 21 10:07:28 crc kubenswrapper[5119]: I0121 10:07:28.724004 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b2sg2"
Jan 21 10:07:28 crc kubenswrapper[5119]: I0121 10:07:28.911653 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa-utilities\") pod \"c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa\" (UID: \"c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa\") "
Jan 21 10:07:28 crc kubenswrapper[5119]: I0121 10:07:28.912478 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa-utilities" (OuterVolumeSpecName: "utilities") pod "c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa" (UID: "c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:07:28 crc kubenswrapper[5119]: I0121 10:07:28.912695 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gm6q\" (UniqueName: \"kubernetes.io/projected/c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa-kube-api-access-4gm6q\") pod \"c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa\" (UID: \"c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa\") "
Jan 21 10:07:28 crc kubenswrapper[5119]: I0121 10:07:28.912747 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa-catalog-content\") pod \"c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa\" (UID: \"c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa\") "
Jan 21 10:07:28 crc kubenswrapper[5119]: I0121 10:07:28.912919 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 10:07:28 crc kubenswrapper[5119]: I0121 10:07:28.924752 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa-kube-api-access-4gm6q" (OuterVolumeSpecName: "kube-api-access-4gm6q") pod "c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa" (UID: "c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa"). InnerVolumeSpecName "kube-api-access-4gm6q". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:07:28 crc kubenswrapper[5119]: I0121 10:07:28.946532 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa" (UID: "c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:07:29 crc kubenswrapper[5119]: I0121 10:07:29.013903 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 10:07:29 crc kubenswrapper[5119]: I0121 10:07:29.013939 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4gm6q\" (UniqueName: \"kubernetes.io/projected/c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa-kube-api-access-4gm6q\") on node \"crc\" DevicePath \"\""
Jan 21 10:07:29 crc kubenswrapper[5119]: I0121 10:07:29.254767 5119 generic.go:358] "Generic (PLEG): container finished" podID="c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa" containerID="983a394cc780e9ce6331a923c51abe88c838573d38482b5fd45a5f3055513ea6" exitCode=0
Jan 21 10:07:29 crc kubenswrapper[5119]: I0121 10:07:29.255070 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b2sg2"
Jan 21 10:07:29 crc kubenswrapper[5119]: I0121 10:07:29.254957 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b2sg2" event={"ID":"c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa","Type":"ContainerDied","Data":"983a394cc780e9ce6331a923c51abe88c838573d38482b5fd45a5f3055513ea6"}
Jan 21 10:07:29 crc kubenswrapper[5119]: I0121 10:07:29.255667 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b2sg2" event={"ID":"c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa","Type":"ContainerDied","Data":"e140fa0439da8bf53bef4827827171fbd446c0e51e93bf51e70f3eded816ef05"}
Jan 21 10:07:29 crc kubenswrapper[5119]: I0121 10:07:29.255688 5119 scope.go:117] "RemoveContainer" containerID="983a394cc780e9ce6331a923c51abe88c838573d38482b5fd45a5f3055513ea6"
Jan 21 10:07:29 crc kubenswrapper[5119]: I0121 10:07:29.291641 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b2sg2"]
Jan 21 10:07:29 crc kubenswrapper[5119]: I0121 10:07:29.295568 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b2sg2"]
Jan 21 10:07:29 crc kubenswrapper[5119]: I0121 10:07:29.888185 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-87w9p"]
Jan 21 10:07:29 crc kubenswrapper[5119]: I0121 10:07:29.888770 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa" containerName="extract-content"
Jan 21 10:07:29 crc kubenswrapper[5119]: I0121 10:07:29.888781 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa" containerName="extract-content"
Jan 21 10:07:29 crc kubenswrapper[5119]: I0121 10:07:29.888788 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa" containerName="registry-server"
Jan 21 10:07:29 crc kubenswrapper[5119]: I0121 10:07:29.888794 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa" containerName="registry-server"
Jan 21 10:07:29 crc kubenswrapper[5119]: I0121 10:07:29.888810 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa" containerName="extract-utilities"
Jan 21 10:07:29 crc kubenswrapper[5119]: I0121 10:07:29.888815 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa" containerName="extract-utilities"
Jan 21 10:07:29 crc kubenswrapper[5119]: I0121 10:07:29.888912 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa" containerName="registry-server"
Jan 21 10:07:29 crc kubenswrapper[5119]: I0121 10:07:29.893353 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-87w9p"
Jan 21 10:07:29 crc kubenswrapper[5119]: I0121 10:07:29.895364 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-dmkmz\""
Jan 21 10:07:29 crc kubenswrapper[5119]: I0121 10:07:29.898116 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-87w9p"]
Jan 21 10:07:30 crc kubenswrapper[5119]: I0121 10:07:30.027778 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e419bfea-ad6b-452c-a894-952a01ea8429-bound-sa-token\") pod \"cert-manager-858d87f86b-87w9p\" (UID: \"e419bfea-ad6b-452c-a894-952a01ea8429\") " pod="cert-manager/cert-manager-858d87f86b-87w9p"
Jan 21 10:07:30 crc kubenswrapper[5119]: I0121 10:07:30.028071 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9ptw\" (UniqueName: \"kubernetes.io/projected/e419bfea-ad6b-452c-a894-952a01ea8429-kube-api-access-x9ptw\") pod \"cert-manager-858d87f86b-87w9p\" (UID: \"e419bfea-ad6b-452c-a894-952a01ea8429\") " pod="cert-manager/cert-manager-858d87f86b-87w9p"
Jan 21 10:07:30 crc kubenswrapper[5119]: I0121 10:07:30.128829 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x9ptw\" (UniqueName: \"kubernetes.io/projected/e419bfea-ad6b-452c-a894-952a01ea8429-kube-api-access-x9ptw\") pod \"cert-manager-858d87f86b-87w9p\" (UID: \"e419bfea-ad6b-452c-a894-952a01ea8429\") " pod="cert-manager/cert-manager-858d87f86b-87w9p"
Jan 21 10:07:30 crc kubenswrapper[5119]: I0121 10:07:30.128869 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e419bfea-ad6b-452c-a894-952a01ea8429-bound-sa-token\") pod \"cert-manager-858d87f86b-87w9p\" (UID: \"e419bfea-ad6b-452c-a894-952a01ea8429\") " pod="cert-manager/cert-manager-858d87f86b-87w9p"
Jan 21 10:07:30 crc kubenswrapper[5119]: I0121 10:07:30.146356 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e419bfea-ad6b-452c-a894-952a01ea8429-bound-sa-token\") pod \"cert-manager-858d87f86b-87w9p\" (UID: \"e419bfea-ad6b-452c-a894-952a01ea8429\") " pod="cert-manager/cert-manager-858d87f86b-87w9p"
Jan 21 10:07:30 crc kubenswrapper[5119]: I0121 10:07:30.149756 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9ptw\" (UniqueName: \"kubernetes.io/projected/e419bfea-ad6b-452c-a894-952a01ea8429-kube-api-access-x9ptw\") pod \"cert-manager-858d87f86b-87w9p\" (UID: \"e419bfea-ad6b-452c-a894-952a01ea8429\") " pod="cert-manager/cert-manager-858d87f86b-87w9p"
Jan 21 10:07:30 crc kubenswrapper[5119]: I0121 10:07:30.207554 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-87w9p"
Jan 21 10:07:30 crc kubenswrapper[5119]: I0121 10:07:30.462420 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qkh5d"]
Jan 21 10:07:30 crc kubenswrapper[5119]: I0121 10:07:30.546718 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qkh5d"]
Jan 21 10:07:30 crc kubenswrapper[5119]: I0121 10:07:30.546932 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qkh5d"
Jan 21 10:07:30 crc kubenswrapper[5119]: I0121 10:07:30.598970 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa" path="/var/lib/kubelet/pods/c2bf0c0e-e84d-4fd4-b8f1-219a3fd3b1aa/volumes"
Jan 21 10:07:30 crc kubenswrapper[5119]: I0121 10:07:30.737843 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldkxn\" (UniqueName: \"kubernetes.io/projected/3f134055-6481-4b56-9172-cb137e1fefa7-kube-api-access-ldkxn\") pod \"redhat-operators-qkh5d\" (UID: \"3f134055-6481-4b56-9172-cb137e1fefa7\") " pod="openshift-marketplace/redhat-operators-qkh5d"
Jan 21 10:07:30 crc kubenswrapper[5119]: I0121 10:07:30.738067 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f134055-6481-4b56-9172-cb137e1fefa7-utilities\") pod \"redhat-operators-qkh5d\" (UID: \"3f134055-6481-4b56-9172-cb137e1fefa7\") " pod="openshift-marketplace/redhat-operators-qkh5d"
Jan 21 10:07:30 crc kubenswrapper[5119]: I0121 10:07:30.738112 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f134055-6481-4b56-9172-cb137e1fefa7-catalog-content\") pod \"redhat-operators-qkh5d\" (UID: \"3f134055-6481-4b56-9172-cb137e1fefa7\") " pod="openshift-marketplace/redhat-operators-qkh5d"
Jan 21 10:07:30 crc kubenswrapper[5119]: I0121 10:07:30.839895 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f134055-6481-4b56-9172-cb137e1fefa7-utilities\") pod \"redhat-operators-qkh5d\" (UID: \"3f134055-6481-4b56-9172-cb137e1fefa7\") " pod="openshift-marketplace/redhat-operators-qkh5d"
Jan 21 10:07:30 crc kubenswrapper[5119]: I0121 10:07:30.840254 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f134055-6481-4b56-9172-cb137e1fefa7-catalog-content\") pod \"redhat-operators-qkh5d\" (UID: \"3f134055-6481-4b56-9172-cb137e1fefa7\") " pod="openshift-marketplace/redhat-operators-qkh5d"
Jan 21 10:07:30 crc kubenswrapper[5119]: I0121 10:07:30.840296 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ldkxn\" (UniqueName: \"kubernetes.io/projected/3f134055-6481-4b56-9172-cb137e1fefa7-kube-api-access-ldkxn\") pod \"redhat-operators-qkh5d\" (UID: \"3f134055-6481-4b56-9172-cb137e1fefa7\") " pod="openshift-marketplace/redhat-operators-qkh5d"
Jan 21 10:07:30 crc kubenswrapper[5119]: I0121 10:07:30.840308 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f134055-6481-4b56-9172-cb137e1fefa7-utilities\") pod \"redhat-operators-qkh5d\" (UID: \"3f134055-6481-4b56-9172-cb137e1fefa7\") " pod="openshift-marketplace/redhat-operators-qkh5d"
Jan 21 10:07:30 crc kubenswrapper[5119]: I0121 10:07:30.840708 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f134055-6481-4b56-9172-cb137e1fefa7-catalog-content\") pod \"redhat-operators-qkh5d\" (UID: \"3f134055-6481-4b56-9172-cb137e1fefa7\") " pod="openshift-marketplace/redhat-operators-qkh5d"
Jan 21 10:07:30 crc kubenswrapper[5119]: I0121 10:07:30.862558 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldkxn\" (UniqueName: \"kubernetes.io/projected/3f134055-6481-4b56-9172-cb137e1fefa7-kube-api-access-ldkxn\") pod \"redhat-operators-qkh5d\" (UID: \"3f134055-6481-4b56-9172-cb137e1fefa7\") " pod="openshift-marketplace/redhat-operators-qkh5d"
Jan 21 10:07:30 crc kubenswrapper[5119]: I0121 10:07:30.863068 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qkh5d"
Jan 21 10:07:32 crc kubenswrapper[5119]: I0121 10:07:32.315732 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 10:07:33 crc kubenswrapper[5119]: I0121 10:07:33.037835 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-9t8lg"
Jan 21 10:07:33 crc kubenswrapper[5119]: I0121 10:07:33.038212 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9t8lg"
Jan 21 10:07:33 crc kubenswrapper[5119]: I0121 10:07:33.104775 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9t8lg"
Jan 21 10:07:33 crc kubenswrapper[5119]: I0121 10:07:33.331128 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9t8lg"
Jan 21 10:07:34 crc kubenswrapper[5119]: I0121 10:07:34.657721 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9t8lg"]
Jan 21 10:07:35 crc kubenswrapper[5119]: I0121 10:07:35.305255 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9t8lg" podUID="2b9223d3-c96e-4967-9bb5-a877a5635e02" containerName="registry-server" containerID="cri-o://6b0db893a6daf0d05a8b0109da764ad19193de5a547ca9c32f48f24922c85a67" gracePeriod=2
Jan 21 10:07:37 crc kubenswrapper[5119]: I0121 10:07:37.326863 5119 generic.go:358] "Generic (PLEG): container finished" podID="2b9223d3-c96e-4967-9bb5-a877a5635e02" containerID="6b0db893a6daf0d05a8b0109da764ad19193de5a547ca9c32f48f24922c85a67" exitCode=0
Jan 21 10:07:37 crc kubenswrapper[5119]: I0121 10:07:37.326954 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t8lg" event={"ID":"2b9223d3-c96e-4967-9bb5-a877a5635e02","Type":"ContainerDied","Data":"6b0db893a6daf0d05a8b0109da764ad19193de5a547ca9c32f48f24922c85a67"}
Jan 21 10:07:39 crc kubenswrapper[5119]: I0121 10:07:39.536888 5119 scope.go:117] "RemoveContainer" containerID="c1256d70eec6a3453470ee2f0df9f40bbdcebc7daa32b9ce8cfba702240a99c5"
Jan 21 10:07:39 crc kubenswrapper[5119]: I0121 10:07:39.575579 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9t8lg"
Jan 21 10:07:39 crc kubenswrapper[5119]: I0121 10:07:39.669808 5119 scope.go:117] "RemoveContainer" containerID="e7390156037cb15bf74270db62ae3bab0e2646206cc03874e47e2707e72b4ecb"
Jan 21 10:07:39 crc kubenswrapper[5119]: I0121 10:07:39.670065 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b9223d3-c96e-4967-9bb5-a877a5635e02-catalog-content\") pod \"2b9223d3-c96e-4967-9bb5-a877a5635e02\" (UID: \"2b9223d3-c96e-4967-9bb5-a877a5635e02\") "
Jan 21 10:07:39 crc kubenswrapper[5119]: I0121 10:07:39.670331 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b9223d3-c96e-4967-9bb5-a877a5635e02-utilities\") pod \"2b9223d3-c96e-4967-9bb5-a877a5635e02\" (UID: \"2b9223d3-c96e-4967-9bb5-a877a5635e02\") "
Jan 21 10:07:39 crc kubenswrapper[5119]: I0121 10:07:39.670406 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4nhq\" (UniqueName: \"kubernetes.io/projected/2b9223d3-c96e-4967-9bb5-a877a5635e02-kube-api-access-t4nhq\") pod \"2b9223d3-c96e-4967-9bb5-a877a5635e02\" (UID: \"2b9223d3-c96e-4967-9bb5-a877a5635e02\") "
Jan 21 10:07:39 crc kubenswrapper[5119]: I0121 10:07:39.671258 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b9223d3-c96e-4967-9bb5-a877a5635e02-utilities" (OuterVolumeSpecName: "utilities") pod "2b9223d3-c96e-4967-9bb5-a877a5635e02" (UID: "2b9223d3-c96e-4967-9bb5-a877a5635e02"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:07:39 crc kubenswrapper[5119]: I0121 10:07:39.671521 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b9223d3-c96e-4967-9bb5-a877a5635e02-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 10:07:39 crc kubenswrapper[5119]: I0121 10:07:39.675750 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b9223d3-c96e-4967-9bb5-a877a5635e02-kube-api-access-t4nhq" (OuterVolumeSpecName: "kube-api-access-t4nhq") pod "2b9223d3-c96e-4967-9bb5-a877a5635e02" (UID: "2b9223d3-c96e-4967-9bb5-a877a5635e02"). InnerVolumeSpecName "kube-api-access-t4nhq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:07:39 crc kubenswrapper[5119]: I0121 10:07:39.739105 5119 scope.go:117] "RemoveContainer" containerID="983a394cc780e9ce6331a923c51abe88c838573d38482b5fd45a5f3055513ea6"
Jan 21 10:07:39 crc kubenswrapper[5119]: E0121 10:07:39.741991 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"983a394cc780e9ce6331a923c51abe88c838573d38482b5fd45a5f3055513ea6\": container with ID starting with 983a394cc780e9ce6331a923c51abe88c838573d38482b5fd45a5f3055513ea6 not found: ID does not exist" containerID="983a394cc780e9ce6331a923c51abe88c838573d38482b5fd45a5f3055513ea6"
Jan 21 10:07:39 crc kubenswrapper[5119]: I0121 10:07:39.742045 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"983a394cc780e9ce6331a923c51abe88c838573d38482b5fd45a5f3055513ea6"} err="failed to get container status \"983a394cc780e9ce6331a923c51abe88c838573d38482b5fd45a5f3055513ea6\": rpc error: code = NotFound desc = could not find container \"983a394cc780e9ce6331a923c51abe88c838573d38482b5fd45a5f3055513ea6\": container with ID starting with 983a394cc780e9ce6331a923c51abe88c838573d38482b5fd45a5f3055513ea6 not found: ID does not exist"
Jan 21 10:07:39 crc kubenswrapper[5119]: I0121 10:07:39.742075 5119 scope.go:117] "RemoveContainer" containerID="c1256d70eec6a3453470ee2f0df9f40bbdcebc7daa32b9ce8cfba702240a99c5"
Jan 21 10:07:39 crc kubenswrapper[5119]: E0121 10:07:39.751880 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1256d70eec6a3453470ee2f0df9f40bbdcebc7daa32b9ce8cfba702240a99c5\": container with ID starting with c1256d70eec6a3453470ee2f0df9f40bbdcebc7daa32b9ce8cfba702240a99c5 not found: ID does not exist" containerID="c1256d70eec6a3453470ee2f0df9f40bbdcebc7daa32b9ce8cfba702240a99c5"
Jan 21 10:07:39 crc kubenswrapper[5119]: I0121 10:07:39.751936 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1256d70eec6a3453470ee2f0df9f40bbdcebc7daa32b9ce8cfba702240a99c5"} err="failed to get container status \"c1256d70eec6a3453470ee2f0df9f40bbdcebc7daa32b9ce8cfba702240a99c5\": rpc error: code = NotFound desc = could not find container \"c1256d70eec6a3453470ee2f0df9f40bbdcebc7daa32b9ce8cfba702240a99c5\": container with ID starting with c1256d70eec6a3453470ee2f0df9f40bbdcebc7daa32b9ce8cfba702240a99c5 not found: ID does not exist"
Jan 21 10:07:39 crc kubenswrapper[5119]: I0121 10:07:39.751975 5119 scope.go:117] "RemoveContainer" containerID="e7390156037cb15bf74270db62ae3bab0e2646206cc03874e47e2707e72b4ecb"
Jan 21 10:07:39 crc kubenswrapper[5119]: I0121 10:07:39.753491 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b9223d3-c96e-4967-9bb5-a877a5635e02-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b9223d3-c96e-4967-9bb5-a877a5635e02" (UID: "2b9223d3-c96e-4967-9bb5-a877a5635e02"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:07:39 crc kubenswrapper[5119]: E0121 10:07:39.770828 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7390156037cb15bf74270db62ae3bab0e2646206cc03874e47e2707e72b4ecb\": container with ID starting with e7390156037cb15bf74270db62ae3bab0e2646206cc03874e47e2707e72b4ecb not found: ID does not exist" containerID="e7390156037cb15bf74270db62ae3bab0e2646206cc03874e47e2707e72b4ecb"
Jan 21 10:07:39 crc kubenswrapper[5119]: I0121 10:07:39.770869 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7390156037cb15bf74270db62ae3bab0e2646206cc03874e47e2707e72b4ecb"} err="failed to get container status \"e7390156037cb15bf74270db62ae3bab0e2646206cc03874e47e2707e72b4ecb\": rpc error: code = NotFound desc = could not find container \"e7390156037cb15bf74270db62ae3bab0e2646206cc03874e47e2707e72b4ecb\": container with ID starting with e7390156037cb15bf74270db62ae3bab0e2646206cc03874e47e2707e72b4ecb not found: ID does not exist"
Jan 21 10:07:39 crc kubenswrapper[5119]: I0121 10:07:39.772302 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b9223d3-c96e-4967-9bb5-a877a5635e02-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 10:07:39 crc kubenswrapper[5119]: I0121 10:07:39.772328 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t4nhq\" (UniqueName: \"kubernetes.io/projected/2b9223d3-c96e-4967-9bb5-a877a5635e02-kube-api-access-t4nhq\") on node \"crc\" DevicePath \"\""
Jan 21 10:07:40 crc kubenswrapper[5119]: I0121 10:07:40.107082 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-87w9p"]
Jan 21 10:07:40 crc kubenswrapper[5119]: W0121 10:07:40.108461 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode419bfea_ad6b_452c_a894_952a01ea8429.slice/crio-db027d30b6fa35d24641ca6b6a3febc4faaff29bd2636fcb663a8b1a7815dabb WatchSource:0}: Error finding container db027d30b6fa35d24641ca6b6a3febc4faaff29bd2636fcb663a8b1a7815dabb: Status 404 returned error can't find the container with id db027d30b6fa35d24641ca6b6a3febc4faaff29bd2636fcb663a8b1a7815dabb
Jan 21 10:07:40 crc kubenswrapper[5119]: I0121 10:07:40.163154 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qkh5d"]
Jan 21 10:07:40 crc kubenswrapper[5119]: W0121 10:07:40.170244 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f134055_6481_4b56_9172_cb137e1fefa7.slice/crio-0ff11aaf293aaf00085d37915011720d8369d4d98c78d3dfb24329f5ded73bc4 WatchSource:0}: Error finding container 0ff11aaf293aaf00085d37915011720d8369d4d98c78d3dfb24329f5ded73bc4: Status 404 returned error can't find the container with id 0ff11aaf293aaf00085d37915011720d8369d4d98c78d3dfb24329f5ded73bc4
Jan 21 10:07:40 crc kubenswrapper[5119]: I0121 10:07:40.345358 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-vv5mq" event={"ID":"d9416295-81e6-488c-b079-97d7ba7c4f3e","Type":"ContainerStarted","Data":"a680ca60a3f0067fee3bbde10dc4f2d74c38d97390ccc7f08f17d814af36ec72"}
Jan 21 10:07:40 crc kubenswrapper[5119]: I0121 10:07:40.346918 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-ps4d8" event={"ID":"9ab82c58-d623-4b22-aae4-4f8c744cb42d","Type":"ContainerStarted","Data":"12c035cfb35980174cc37e29fb39502b821ecb7cbcab7883b7d4e22c23c4b5e6"}
Jan 21 10:07:40 crc kubenswrapper[5119]: I0121 10:07:40.347083 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-ps4d8"
Jan 21 10:07:40 crc kubenswrapper[5119]: I0121 10:07:40.350677 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t8lg" event={"ID":"2b9223d3-c96e-4967-9bb5-a877a5635e02","Type":"ContainerDied","Data":"6447725aa808262657fc48c3e95475b92a7f5b7e1e1803092845ce45d1aa8198"}
Jan 21 10:07:40 crc kubenswrapper[5119]: I0121 10:07:40.350717 5119 scope.go:117] "RemoveContainer" containerID="6b0db893a6daf0d05a8b0109da764ad19193de5a547ca9c32f48f24922c85a67"
Jan 21 10:07:40 crc kubenswrapper[5119]: I0121 10:07:40.350775 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9t8lg"
Jan 21 10:07:40 crc kubenswrapper[5119]: I0121 10:07:40.352045 5119 generic.go:358] "Generic (PLEG): container finished" podID="3f134055-6481-4b56-9172-cb137e1fefa7" containerID="0dd7360699b8989a634ae07e25de645baaf1f6dc5c859b7d3dae46f4a4697161" exitCode=0
Jan 21 10:07:40 crc kubenswrapper[5119]: I0121 10:07:40.352180 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkh5d" event={"ID":"3f134055-6481-4b56-9172-cb137e1fefa7","Type":"ContainerDied","Data":"0dd7360699b8989a634ae07e25de645baaf1f6dc5c859b7d3dae46f4a4697161"}
Jan 21 10:07:40 crc kubenswrapper[5119]: I0121 10:07:40.352209 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkh5d" event={"ID":"3f134055-6481-4b56-9172-cb137e1fefa7","Type":"ContainerStarted","Data":"0ff11aaf293aaf00085d37915011720d8369d4d98c78d3dfb24329f5ded73bc4"}
Jan 21 10:07:40 crc kubenswrapper[5119]: I0121 10:07:40.356997 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-87w9p" event={"ID":"e419bfea-ad6b-452c-a894-952a01ea8429","Type":"ContainerStarted","Data":"4c5b36d73c6be28b0ecbede57a718c70b0edd2fa4a2d82c999749485004f6b5d"}
Jan 21 10:07:40 crc kubenswrapper[5119]: I0121 10:07:40.357040 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-87w9p" event={"ID":"e419bfea-ad6b-452c-a894-952a01ea8429","Type":"ContainerStarted","Data":"db027d30b6fa35d24641ca6b6a3febc4faaff29bd2636fcb663a8b1a7815dabb"}
Jan 21 10:07:40 crc kubenswrapper[5119]: I0121 10:07:40.364313 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-vv5mq" podStartSLOduration=5.125760453 podStartE2EDuration="20.364300946s" podCreationTimestamp="2026-01-21 10:07:20 +0000 UTC" firstStartedPulling="2026-01-21 10:07:24.500548684 +0000 UTC m=+760.168640362" lastFinishedPulling="2026-01-21 10:07:39.739089177 +0000 UTC m=+775.407180855" observedRunningTime="2026-01-21 10:07:40.363113582 +0000 UTC m=+776.031205260" watchObservedRunningTime="2026-01-21 10:07:40.364300946 +0000 UTC m=+776.032392624"
Jan 21 10:07:40 crc kubenswrapper[5119]: I0121 10:07:40.378789 5119 scope.go:117] "RemoveContainer" containerID="aeae7b92d050f0394d7e15c35347e09ed2f7b212a6284574317532acec3e8d65"
Jan 21 10:07:40 crc kubenswrapper[5119]: I0121 10:07:40.406051 5119 scope.go:117] "RemoveContainer" containerID="afab34348795c31bc885ce7c875c9c2b54eb4832e99094268e80ebaebe21ce74"
Jan 21 10:07:40 crc kubenswrapper[5119]: I0121 10:07:40.453175 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-87w9p" podStartSLOduration=11.453158987 podStartE2EDuration="11.453158987s" podCreationTimestamp="2026-01-21 10:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:07:40.425280929 +0000 UTC m=+776.093372607" watchObservedRunningTime="2026-01-21 10:07:40.453158987 +0000 UTC m=+776.121250665"
Jan 21 10:07:40 crc kubenswrapper[5119]: I0121 10:07:40.474115 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-ps4d8" podStartSLOduration=2.519192764 podStartE2EDuration="21.474099258s" podCreationTimestamp="2026-01-21 10:07:19 +0000 UTC" firstStartedPulling="2026-01-21 10:07:20.722587763 +0000 UTC m=+756.390679451" lastFinishedPulling="2026-01-21 10:07:39.677494267 +0000 UTC m=+775.345585945" observedRunningTime="2026-01-21 10:07:40.456720577 +0000 UTC m=+776.124812255" watchObservedRunningTime="2026-01-21 10:07:40.474099258 +0000 UTC m=+776.142190936"
Jan 21 10:07:40 crc kubenswrapper[5119]: I0121 10:07:40.474466 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9t8lg"]
Jan 21 10:07:40 crc kubenswrapper[5119]: I0121 10:07:40.478421 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9t8lg"]
Jan 21 10:07:40 crc kubenswrapper[5119]: I0121 10:07:40.597391 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b9223d3-c96e-4967-9bb5-a877a5635e02" path="/var/lib/kubelet/pods/2b9223d3-c96e-4967-9bb5-a877a5635e02/volumes"
Jan 21 10:07:42 crc kubenswrapper[5119]: I0121 10:07:42.373584 5119 generic.go:358] "Generic (PLEG): container finished" podID="3f134055-6481-4b56-9172-cb137e1fefa7" containerID="ba8d98886e0262ec71c17d71b760bc8237cc5ee473d875483d7036c494d769f0" exitCode=0
Jan 21 10:07:42 crc kubenswrapper[5119]: I0121 10:07:42.373741 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkh5d" event={"ID":"3f134055-6481-4b56-9172-cb137e1fefa7","Type":"ContainerDied","Data":"ba8d98886e0262ec71c17d71b760bc8237cc5ee473d875483d7036c494d769f0"}
Jan 21 10:07:43 crc kubenswrapper[5119]: I0121 10:07:43.383567 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkh5d" event={"ID":"3f134055-6481-4b56-9172-cb137e1fefa7","Type":"ContainerStarted","Data":"4e55d8e0927fa7f7af44665b0a46e9a4f8472f284cb51a5b6d43e4f27eeb5a8b"}
Jan 21 10:07:43 crc kubenswrapper[5119]: I0121 10:07:43.409346 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qkh5d" podStartSLOduration=12.565902822 podStartE2EDuration="13.409325325s" podCreationTimestamp="2026-01-21 10:07:30 +0000 UTC" firstStartedPulling="2026-01-21 10:07:40.352817092 +0000 UTC m=+776.020908770" lastFinishedPulling="2026-01-21 10:07:41.196239595 +0000 UTC m=+776.864331273" observedRunningTime="2026-01-21 10:07:43.408064209 +0000 UTC m=+779.076155897" watchObservedRunningTime="2026-01-21 10:07:43.409325325 +0000 UTC m=+779.077417003" Jan 21 10:07:46 crc kubenswrapper[5119]: I0121 10:07:46.368283 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-ps4d8" Jan 21 10:07:50 crc kubenswrapper[5119]: I0121 10:07:50.864049 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qkh5d" Jan 21 10:07:50 crc kubenswrapper[5119]: I0121 10:07:50.866008 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-qkh5d" Jan 21 10:07:50 crc kubenswrapper[5119]: I0121 10:07:50.906854 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qkh5d" Jan 21 10:07:51 crc kubenswrapper[5119]: I0121 10:07:51.475626 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qkh5d" Jan 21 10:07:51 crc kubenswrapper[5119]: I0121 10:07:51.512958 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qkh5d"] Jan 21 10:07:53 crc kubenswrapper[5119]: I0121 10:07:53.450163 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qkh5d" podUID="3f134055-6481-4b56-9172-cb137e1fefa7" 
containerName="registry-server" containerID="cri-o://4e55d8e0927fa7f7af44665b0a46e9a4f8472f284cb51a5b6d43e4f27eeb5a8b" gracePeriod=2 Jan 21 10:07:55 crc kubenswrapper[5119]: I0121 10:07:55.038619 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qkh5d" Jan 21 10:07:55 crc kubenswrapper[5119]: I0121 10:07:55.082619 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldkxn\" (UniqueName: \"kubernetes.io/projected/3f134055-6481-4b56-9172-cb137e1fefa7-kube-api-access-ldkxn\") pod \"3f134055-6481-4b56-9172-cb137e1fefa7\" (UID: \"3f134055-6481-4b56-9172-cb137e1fefa7\") " Jan 21 10:07:55 crc kubenswrapper[5119]: I0121 10:07:55.082680 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f134055-6481-4b56-9172-cb137e1fefa7-catalog-content\") pod \"3f134055-6481-4b56-9172-cb137e1fefa7\" (UID: \"3f134055-6481-4b56-9172-cb137e1fefa7\") " Jan 21 10:07:55 crc kubenswrapper[5119]: I0121 10:07:55.082811 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f134055-6481-4b56-9172-cb137e1fefa7-utilities\") pod \"3f134055-6481-4b56-9172-cb137e1fefa7\" (UID: \"3f134055-6481-4b56-9172-cb137e1fefa7\") " Jan 21 10:07:55 crc kubenswrapper[5119]: I0121 10:07:55.084089 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f134055-6481-4b56-9172-cb137e1fefa7-utilities" (OuterVolumeSpecName: "utilities") pod "3f134055-6481-4b56-9172-cb137e1fefa7" (UID: "3f134055-6481-4b56-9172-cb137e1fefa7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:07:55 crc kubenswrapper[5119]: I0121 10:07:55.087867 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f134055-6481-4b56-9172-cb137e1fefa7-kube-api-access-ldkxn" (OuterVolumeSpecName: "kube-api-access-ldkxn") pod "3f134055-6481-4b56-9172-cb137e1fefa7" (UID: "3f134055-6481-4b56-9172-cb137e1fefa7"). InnerVolumeSpecName "kube-api-access-ldkxn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:07:55 crc kubenswrapper[5119]: I0121 10:07:55.180226 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f134055-6481-4b56-9172-cb137e1fefa7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f134055-6481-4b56-9172-cb137e1fefa7" (UID: "3f134055-6481-4b56-9172-cb137e1fefa7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:07:55 crc kubenswrapper[5119]: I0121 10:07:55.183822 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ldkxn\" (UniqueName: \"kubernetes.io/projected/3f134055-6481-4b56-9172-cb137e1fefa7-kube-api-access-ldkxn\") on node \"crc\" DevicePath \"\"" Jan 21 10:07:55 crc kubenswrapper[5119]: I0121 10:07:55.183844 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f134055-6481-4b56-9172-cb137e1fefa7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:07:55 crc kubenswrapper[5119]: I0121 10:07:55.183852 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f134055-6481-4b56-9172-cb137e1fefa7-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:07:55 crc kubenswrapper[5119]: I0121 10:07:55.463950 5119 generic.go:358] "Generic (PLEG): container finished" podID="3f134055-6481-4b56-9172-cb137e1fefa7" 
containerID="4e55d8e0927fa7f7af44665b0a46e9a4f8472f284cb51a5b6d43e4f27eeb5a8b" exitCode=0 Jan 21 10:07:55 crc kubenswrapper[5119]: I0121 10:07:55.464043 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkh5d" event={"ID":"3f134055-6481-4b56-9172-cb137e1fefa7","Type":"ContainerDied","Data":"4e55d8e0927fa7f7af44665b0a46e9a4f8472f284cb51a5b6d43e4f27eeb5a8b"} Jan 21 10:07:55 crc kubenswrapper[5119]: I0121 10:07:55.464092 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qkh5d" Jan 21 10:07:55 crc kubenswrapper[5119]: I0121 10:07:55.464124 5119 scope.go:117] "RemoveContainer" containerID="4e55d8e0927fa7f7af44665b0a46e9a4f8472f284cb51a5b6d43e4f27eeb5a8b" Jan 21 10:07:55 crc kubenswrapper[5119]: I0121 10:07:55.464105 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkh5d" event={"ID":"3f134055-6481-4b56-9172-cb137e1fefa7","Type":"ContainerDied","Data":"0ff11aaf293aaf00085d37915011720d8369d4d98c78d3dfb24329f5ded73bc4"} Jan 21 10:07:55 crc kubenswrapper[5119]: I0121 10:07:55.482154 5119 scope.go:117] "RemoveContainer" containerID="ba8d98886e0262ec71c17d71b760bc8237cc5ee473d875483d7036c494d769f0" Jan 21 10:07:55 crc kubenswrapper[5119]: I0121 10:07:55.531000 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qkh5d"] Jan 21 10:07:55 crc kubenswrapper[5119]: I0121 10:07:55.534536 5119 scope.go:117] "RemoveContainer" containerID="0dd7360699b8989a634ae07e25de645baaf1f6dc5c859b7d3dae46f4a4697161" Jan 21 10:07:55 crc kubenswrapper[5119]: I0121 10:07:55.534976 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qkh5d"] Jan 21 10:07:55 crc kubenswrapper[5119]: I0121 10:07:55.567117 5119 scope.go:117] "RemoveContainer" containerID="4e55d8e0927fa7f7af44665b0a46e9a4f8472f284cb51a5b6d43e4f27eeb5a8b" Jan 21 10:07:55 crc 
kubenswrapper[5119]: E0121 10:07:55.568688 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e55d8e0927fa7f7af44665b0a46e9a4f8472f284cb51a5b6d43e4f27eeb5a8b\": container with ID starting with 4e55d8e0927fa7f7af44665b0a46e9a4f8472f284cb51a5b6d43e4f27eeb5a8b not found: ID does not exist" containerID="4e55d8e0927fa7f7af44665b0a46e9a4f8472f284cb51a5b6d43e4f27eeb5a8b" Jan 21 10:07:55 crc kubenswrapper[5119]: I0121 10:07:55.568774 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e55d8e0927fa7f7af44665b0a46e9a4f8472f284cb51a5b6d43e4f27eeb5a8b"} err="failed to get container status \"4e55d8e0927fa7f7af44665b0a46e9a4f8472f284cb51a5b6d43e4f27eeb5a8b\": rpc error: code = NotFound desc = could not find container \"4e55d8e0927fa7f7af44665b0a46e9a4f8472f284cb51a5b6d43e4f27eeb5a8b\": container with ID starting with 4e55d8e0927fa7f7af44665b0a46e9a4f8472f284cb51a5b6d43e4f27eeb5a8b not found: ID does not exist" Jan 21 10:07:55 crc kubenswrapper[5119]: I0121 10:07:55.568826 5119 scope.go:117] "RemoveContainer" containerID="ba8d98886e0262ec71c17d71b760bc8237cc5ee473d875483d7036c494d769f0" Jan 21 10:07:55 crc kubenswrapper[5119]: E0121 10:07:55.569759 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba8d98886e0262ec71c17d71b760bc8237cc5ee473d875483d7036c494d769f0\": container with ID starting with ba8d98886e0262ec71c17d71b760bc8237cc5ee473d875483d7036c494d769f0 not found: ID does not exist" containerID="ba8d98886e0262ec71c17d71b760bc8237cc5ee473d875483d7036c494d769f0" Jan 21 10:07:55 crc kubenswrapper[5119]: I0121 10:07:55.569822 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba8d98886e0262ec71c17d71b760bc8237cc5ee473d875483d7036c494d769f0"} err="failed to get container status 
\"ba8d98886e0262ec71c17d71b760bc8237cc5ee473d875483d7036c494d769f0\": rpc error: code = NotFound desc = could not find container \"ba8d98886e0262ec71c17d71b760bc8237cc5ee473d875483d7036c494d769f0\": container with ID starting with ba8d98886e0262ec71c17d71b760bc8237cc5ee473d875483d7036c494d769f0 not found: ID does not exist" Jan 21 10:07:55 crc kubenswrapper[5119]: I0121 10:07:55.569928 5119 scope.go:117] "RemoveContainer" containerID="0dd7360699b8989a634ae07e25de645baaf1f6dc5c859b7d3dae46f4a4697161" Jan 21 10:07:55 crc kubenswrapper[5119]: E0121 10:07:55.571038 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dd7360699b8989a634ae07e25de645baaf1f6dc5c859b7d3dae46f4a4697161\": container with ID starting with 0dd7360699b8989a634ae07e25de645baaf1f6dc5c859b7d3dae46f4a4697161 not found: ID does not exist" containerID="0dd7360699b8989a634ae07e25de645baaf1f6dc5c859b7d3dae46f4a4697161" Jan 21 10:07:55 crc kubenswrapper[5119]: I0121 10:07:55.571094 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dd7360699b8989a634ae07e25de645baaf1f6dc5c859b7d3dae46f4a4697161"} err="failed to get container status \"0dd7360699b8989a634ae07e25de645baaf1f6dc5c859b7d3dae46f4a4697161\": rpc error: code = NotFound desc = could not find container \"0dd7360699b8989a634ae07e25de645baaf1f6dc5c859b7d3dae46f4a4697161\": container with ID starting with 0dd7360699b8989a634ae07e25de645baaf1f6dc5c859b7d3dae46f4a4697161 not found: ID does not exist" Jan 21 10:07:56 crc kubenswrapper[5119]: I0121 10:07:56.609580 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f134055-6481-4b56-9172-cb137e1fefa7" path="/var/lib/kubelet/pods/3f134055-6481-4b56-9172-cb137e1fefa7/volumes" Jan 21 10:08:00 crc kubenswrapper[5119]: I0121 10:08:00.149105 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483168-6b2lx"] Jan 21 10:08:00 
crc kubenswrapper[5119]: I0121 10:08:00.150442 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3f134055-6481-4b56-9172-cb137e1fefa7" containerName="registry-server" Jan 21 10:08:00 crc kubenswrapper[5119]: I0121 10:08:00.150473 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f134055-6481-4b56-9172-cb137e1fefa7" containerName="registry-server" Jan 21 10:08:00 crc kubenswrapper[5119]: I0121 10:08:00.150501 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2b9223d3-c96e-4967-9bb5-a877a5635e02" containerName="extract-content" Jan 21 10:08:00 crc kubenswrapper[5119]: I0121 10:08:00.150513 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b9223d3-c96e-4967-9bb5-a877a5635e02" containerName="extract-content" Jan 21 10:08:00 crc kubenswrapper[5119]: I0121 10:08:00.150535 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2b9223d3-c96e-4967-9bb5-a877a5635e02" containerName="registry-server" Jan 21 10:08:00 crc kubenswrapper[5119]: I0121 10:08:00.150548 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b9223d3-c96e-4967-9bb5-a877a5635e02" containerName="registry-server" Jan 21 10:08:00 crc kubenswrapper[5119]: I0121 10:08:00.150567 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3f134055-6481-4b56-9172-cb137e1fefa7" containerName="extract-utilities" Jan 21 10:08:00 crc kubenswrapper[5119]: I0121 10:08:00.150578 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f134055-6481-4b56-9172-cb137e1fefa7" containerName="extract-utilities" Jan 21 10:08:00 crc kubenswrapper[5119]: I0121 10:08:00.150597 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2b9223d3-c96e-4967-9bb5-a877a5635e02" containerName="extract-utilities" Jan 21 10:08:00 crc kubenswrapper[5119]: I0121 10:08:00.150633 5119 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2b9223d3-c96e-4967-9bb5-a877a5635e02" containerName="extract-utilities" Jan 21 10:08:00 crc kubenswrapper[5119]: I0121 10:08:00.150668 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3f134055-6481-4b56-9172-cb137e1fefa7" containerName="extract-content" Jan 21 10:08:00 crc kubenswrapper[5119]: I0121 10:08:00.150679 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f134055-6481-4b56-9172-cb137e1fefa7" containerName="extract-content" Jan 21 10:08:00 crc kubenswrapper[5119]: I0121 10:08:00.150876 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="3f134055-6481-4b56-9172-cb137e1fefa7" containerName="registry-server" Jan 21 10:08:00 crc kubenswrapper[5119]: I0121 10:08:00.150900 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="2b9223d3-c96e-4967-9bb5-a877a5635e02" containerName="registry-server" Jan 21 10:08:01 crc kubenswrapper[5119]: I0121 10:08:01.042262 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483168-6b2lx" Jan 21 10:08:01 crc kubenswrapper[5119]: I0121 10:08:01.045663 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:08:01 crc kubenswrapper[5119]: I0121 10:08:01.046968 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\"" Jan 21 10:08:01 crc kubenswrapper[5119]: I0121 10:08:01.047871 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:08:01 crc kubenswrapper[5119]: I0121 10:08:01.061143 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483168-6b2lx"] Jan 21 10:08:01 crc kubenswrapper[5119]: I0121 10:08:01.180183 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpmg5\" (UniqueName: \"kubernetes.io/projected/5418d84c-9649-4f00-ae33-e55a33c042ff-kube-api-access-vpmg5\") pod \"auto-csr-approver-29483168-6b2lx\" (UID: \"5418d84c-9649-4f00-ae33-e55a33c042ff\") " pod="openshift-infra/auto-csr-approver-29483168-6b2lx" Jan 21 10:08:01 crc kubenswrapper[5119]: I0121 10:08:01.282151 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vpmg5\" (UniqueName: \"kubernetes.io/projected/5418d84c-9649-4f00-ae33-e55a33c042ff-kube-api-access-vpmg5\") pod \"auto-csr-approver-29483168-6b2lx\" (UID: \"5418d84c-9649-4f00-ae33-e55a33c042ff\") " pod="openshift-infra/auto-csr-approver-29483168-6b2lx" Jan 21 10:08:01 crc kubenswrapper[5119]: I0121 10:08:01.304878 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpmg5\" (UniqueName: \"kubernetes.io/projected/5418d84c-9649-4f00-ae33-e55a33c042ff-kube-api-access-vpmg5\") pod \"auto-csr-approver-29483168-6b2lx\" (UID: 
\"5418d84c-9649-4f00-ae33-e55a33c042ff\") " pod="openshift-infra/auto-csr-approver-29483168-6b2lx" Jan 21 10:08:01 crc kubenswrapper[5119]: I0121 10:08:01.374730 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483168-6b2lx" Jan 21 10:08:01 crc kubenswrapper[5119]: W0121 10:08:01.839453 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5418d84c_9649_4f00_ae33_e55a33c042ff.slice/crio-c3602ba50eef75ce4f8679d7994df005be43a6a16cab0e889931237822fe556b WatchSource:0}: Error finding container c3602ba50eef75ce4f8679d7994df005be43a6a16cab0e889931237822fe556b: Status 404 returned error can't find the container with id c3602ba50eef75ce4f8679d7994df005be43a6a16cab0e889931237822fe556b Jan 21 10:08:01 crc kubenswrapper[5119]: I0121 10:08:01.850588 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483168-6b2lx"] Jan 21 10:08:02 crc kubenswrapper[5119]: I0121 10:08:02.514883 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483168-6b2lx" event={"ID":"5418d84c-9649-4f00-ae33-e55a33c042ff","Type":"ContainerStarted","Data":"c3602ba50eef75ce4f8679d7994df005be43a6a16cab0e889931237822fe556b"} Jan 21 10:08:03 crc kubenswrapper[5119]: I0121 10:08:03.525033 5119 generic.go:358] "Generic (PLEG): container finished" podID="5418d84c-9649-4f00-ae33-e55a33c042ff" containerID="296eeb23a2bfc08eee248f9406806426948a4cbe233f7112f6438c9e6086a468" exitCode=0 Jan 21 10:08:03 crc kubenswrapper[5119]: I0121 10:08:03.525137 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483168-6b2lx" event={"ID":"5418d84c-9649-4f00-ae33-e55a33c042ff","Type":"ContainerDied","Data":"296eeb23a2bfc08eee248f9406806426948a4cbe233f7112f6438c9e6086a468"} Jan 21 10:08:04 crc kubenswrapper[5119]: I0121 10:08:04.783464 5119 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483168-6b2lx" Jan 21 10:08:04 crc kubenswrapper[5119]: I0121 10:08:04.933473 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpmg5\" (UniqueName: \"kubernetes.io/projected/5418d84c-9649-4f00-ae33-e55a33c042ff-kube-api-access-vpmg5\") pod \"5418d84c-9649-4f00-ae33-e55a33c042ff\" (UID: \"5418d84c-9649-4f00-ae33-e55a33c042ff\") " Jan 21 10:08:04 crc kubenswrapper[5119]: I0121 10:08:04.953953 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5418d84c-9649-4f00-ae33-e55a33c042ff-kube-api-access-vpmg5" (OuterVolumeSpecName: "kube-api-access-vpmg5") pod "5418d84c-9649-4f00-ae33-e55a33c042ff" (UID: "5418d84c-9649-4f00-ae33-e55a33c042ff"). InnerVolumeSpecName "kube-api-access-vpmg5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:08:05 crc kubenswrapper[5119]: I0121 10:08:05.035566 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vpmg5\" (UniqueName: \"kubernetes.io/projected/5418d84c-9649-4f00-ae33-e55a33c042ff-kube-api-access-vpmg5\") on node \"crc\" DevicePath \"\"" Jan 21 10:08:05 crc kubenswrapper[5119]: I0121 10:08:05.539859 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483168-6b2lx" event={"ID":"5418d84c-9649-4f00-ae33-e55a33c042ff","Type":"ContainerDied","Data":"c3602ba50eef75ce4f8679d7994df005be43a6a16cab0e889931237822fe556b"} Jan 21 10:08:05 crc kubenswrapper[5119]: I0121 10:08:05.539902 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3602ba50eef75ce4f8679d7994df005be43a6a16cab0e889931237822fe556b" Jan 21 10:08:05 crc kubenswrapper[5119]: I0121 10:08:05.539926 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483168-6b2lx" Jan 21 10:08:05 crc kubenswrapper[5119]: I0121 10:08:05.848715 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483162-pzxwx"] Jan 21 10:08:05 crc kubenswrapper[5119]: I0121 10:08:05.852101 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483162-pzxwx"] Jan 21 10:08:06 crc kubenswrapper[5119]: I0121 10:08:06.597409 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d98510c-550d-49f1-a9f2-e7457a41988d" path="/var/lib/kubelet/pods/5d98510c-550d-49f1-a9f2-e7457a41988d/volumes" Jan 21 10:08:47 crc kubenswrapper[5119]: I0121 10:08:47.068805 5119 scope.go:117] "RemoveContainer" containerID="ce6d3a7102ca8fdadb65c797d828d03c7bf0cd84dfea82c4338daf6b938cfd95" Jan 21 10:08:51 crc kubenswrapper[5119]: I0121 10:08:51.854717 5119 generic.go:358] "Generic (PLEG): container finished" podID="568d37ef-0166-4215-b0c1-ed9c9db7a3a1" containerID="5f77e5835cb06f4b41bae3f3cc6f9a85d3dd6b2ee8a301462b66353c7ba1466b" exitCode=0 Jan 21 10:08:51 crc kubenswrapper[5119]: I0121 10:08:51.854818 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"568d37ef-0166-4215-b0c1-ed9c9db7a3a1","Type":"ContainerDied","Data":"5f77e5835cb06f4b41bae3f3cc6f9a85d3dd6b2ee8a301462b66353c7ba1466b"} Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.123774 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.189921 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-build-system-configs\") pod \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.189993 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-container-storage-root\") pod \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.190033 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdv4g\" (UniqueName: \"kubernetes.io/projected/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-kube-api-access-fdv4g\") pod \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.190060 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-buildworkdir\") pod \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.190148 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-build-proxy-ca-bundles\") pod \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.190757 5119 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "568d37ef-0166-4215-b0c1-ed9c9db7a3a1" (UID: "568d37ef-0166-4215-b0c1-ed9c9db7a3a1"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.190824 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "568d37ef-0166-4215-b0c1-ed9c9db7a3a1" (UID: "568d37ef-0166-4215-b0c1-ed9c9db7a3a1"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.191889 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-container-storage-run\") pod \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.191951 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-builder-dockercfg-llwsp-pull\") pod \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.192030 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-build-ca-bundles\") pod \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " Jan 21 10:08:53 crc 
kubenswrapper[5119]: I0121 10:08:53.192111 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-buildcachedir\") pod \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.192145 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-builder-dockercfg-llwsp-push\") pod \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") " Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.192182 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "568d37ef-0166-4215-b0c1-ed9c9db7a3a1" (UID: "568d37ef-0166-4215-b0c1-ed9c9db7a3a1"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.192242 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-build-blob-cache\") pod \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") "
Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.192288 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-node-pullsecrets\") pod \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\" (UID: \"568d37ef-0166-4215-b0c1-ed9c9db7a3a1\") "
Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.192508 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "568d37ef-0166-4215-b0c1-ed9c9db7a3a1" (UID: "568d37ef-0166-4215-b0c1-ed9c9db7a3a1"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.192791 5119 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.192805 5119 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.192815 5119 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.192826 5119 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.193149 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "568d37ef-0166-4215-b0c1-ed9c9db7a3a1" (UID: "568d37ef-0166-4215-b0c1-ed9c9db7a3a1"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.193454 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "568d37ef-0166-4215-b0c1-ed9c9db7a3a1" (UID: "568d37ef-0166-4215-b0c1-ed9c9db7a3a1"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.196163 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-kube-api-access-fdv4g" (OuterVolumeSpecName: "kube-api-access-fdv4g") pod "568d37ef-0166-4215-b0c1-ed9c9db7a3a1" (UID: "568d37ef-0166-4215-b0c1-ed9c9db7a3a1"). InnerVolumeSpecName "kube-api-access-fdv4g". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.196736 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-builder-dockercfg-llwsp-pull" (OuterVolumeSpecName: "builder-dockercfg-llwsp-pull") pod "568d37ef-0166-4215-b0c1-ed9c9db7a3a1" (UID: "568d37ef-0166-4215-b0c1-ed9c9db7a3a1"). InnerVolumeSpecName "builder-dockercfg-llwsp-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.196773 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-builder-dockercfg-llwsp-push" (OuterVolumeSpecName: "builder-dockercfg-llwsp-push") pod "568d37ef-0166-4215-b0c1-ed9c9db7a3a1" (UID: "568d37ef-0166-4215-b0c1-ed9c9db7a3a1"). InnerVolumeSpecName "builder-dockercfg-llwsp-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.226255 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "568d37ef-0166-4215-b0c1-ed9c9db7a3a1" (UID: "568d37ef-0166-4215-b0c1-ed9c9db7a3a1"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.294142 5119 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.294188 5119 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-builder-dockercfg-llwsp-pull\") on node \"crc\" DevicePath \"\""
Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.294205 5119 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.294220 5119 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-builder-dockercfg-llwsp-push\") on node \"crc\" DevicePath \"\""
Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.294232 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fdv4g\" (UniqueName: \"kubernetes.io/projected/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-kube-api-access-fdv4g\") on node \"crc\" DevicePath \"\""
Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.294243 5119 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-buildworkdir\") on node \"crc\" DevicePath \"\""
Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.384390 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "568d37ef-0166-4215-b0c1-ed9c9db7a3a1" (UID: "568d37ef-0166-4215-b0c1-ed9c9db7a3a1"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.395263 5119 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.872091 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.872113 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"568d37ef-0166-4215-b0c1-ed9c9db7a3a1","Type":"ContainerDied","Data":"8d780bed7dc37005364ba5d012e7ce6027ff63bc8ef6f8b135e1058ed3390c29"}
Jan 21 10:08:53 crc kubenswrapper[5119]: I0121 10:08:53.872164 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d780bed7dc37005364ba5d012e7ce6027ff63bc8ef6f8b135e1058ed3390c29"
Jan 21 10:08:55 crc kubenswrapper[5119]: I0121 10:08:55.072872 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "568d37ef-0166-4215-b0c1-ed9c9db7a3a1" (UID: "568d37ef-0166-4215-b0c1-ed9c9db7a3a1"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:08:55 crc kubenswrapper[5119]: I0121 10:08:55.119175 5119 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/568d37ef-0166-4215-b0c1-ed9c9db7a3a1-container-storage-root\") on node \"crc\" DevicePath \"\""
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.438106 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.438864 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5418d84c-9649-4f00-ae33-e55a33c042ff" containerName="oc"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.438881 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="5418d84c-9649-4f00-ae33-e55a33c042ff" containerName="oc"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.438904 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="568d37ef-0166-4215-b0c1-ed9c9db7a3a1" containerName="git-clone"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.438912 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="568d37ef-0166-4215-b0c1-ed9c9db7a3a1" containerName="git-clone"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.438933 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="568d37ef-0166-4215-b0c1-ed9c9db7a3a1" containerName="docker-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.438942 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="568d37ef-0166-4215-b0c1-ed9c9db7a3a1" containerName="docker-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.438970 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="568d37ef-0166-4215-b0c1-ed9c9db7a3a1" containerName="manage-dockerfile"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.438978 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="568d37ef-0166-4215-b0c1-ed9c9db7a3a1" containerName="manage-dockerfile"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.439098 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="568d37ef-0166-4215-b0c1-ed9c9db7a3a1" containerName="docker-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.439115 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="5418d84c-9649-4f00-ae33-e55a33c042ff" containerName="oc"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.442746 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.444883 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-ca\""
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.445497 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-global-ca\""
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.445945 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-sys-config\""
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.446181 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-llwsp\""
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.460666 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.552763 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.553095 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.553231 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm8df\" (UniqueName: \"kubernetes.io/projected/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-kube-api-access-cm8df\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.553448 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.553566 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-builder-dockercfg-llwsp-push\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.553710 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.553850 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-builder-dockercfg-llwsp-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.554070 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.554218 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.554364 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.554478 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.554678 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.656266 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.656702 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.656787 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.656852 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.656917 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.656968 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.657015 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cm8df\" (UniqueName: \"kubernetes.io/projected/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-kube-api-access-cm8df\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.657053 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.657098 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-builder-dockercfg-llwsp-push\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.657140 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.657176 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-builder-dockercfg-llwsp-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.657253 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.657332 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.657656 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.657250 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.657838 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.657882 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.658015 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.658378 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.658698 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.658799 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.666866 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-builder-dockercfg-llwsp-push\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.667219 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-builder-dockercfg-llwsp-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.676448 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm8df\" (UniqueName: \"kubernetes.io/projected/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-kube-api-access-cm8df\") pod \"smart-gateway-operator-1-build\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:57 crc kubenswrapper[5119]: I0121 10:08:57.760708 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 10:08:58 crc kubenswrapper[5119]: I0121 10:08:58.150721 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Jan 21 10:08:58 crc kubenswrapper[5119]: I0121 10:08:58.906355 5119 generic.go:358] "Generic (PLEG): container finished" podID="54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239" containerID="2194342fad735e14c96f89d0b98f45673ac2cb495697b43d92fed4dd4f1580b6" exitCode=0
Jan 21 10:08:58 crc kubenswrapper[5119]: I0121 10:08:58.906449 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239","Type":"ContainerDied","Data":"2194342fad735e14c96f89d0b98f45673ac2cb495697b43d92fed4dd4f1580b6"}
Jan 21 10:08:58 crc kubenswrapper[5119]: I0121 10:08:58.906747 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239","Type":"ContainerStarted","Data":"6844bdd6afc8cc691f7f8bd1e99602f81963b3b463e876bf6fc1d50830c2fa9c"}
Jan 21 10:08:59 crc kubenswrapper[5119]: I0121 10:08:59.918669 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239","Type":"ContainerStarted","Data":"870fbe4dd60618e4bb553f9c42f4daf82bfae2ae327446fb3a46929458163c14"}
Jan 21 10:08:59 crc kubenswrapper[5119]: I0121 10:08:59.942999 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-1-build" podStartSLOduration=2.942976754 podStartE2EDuration="2.942976754s" podCreationTimestamp="2026-01-21 10:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:59.939118118 +0000 UTC m=+855.607209806" watchObservedRunningTime="2026-01-21 10:08:59.942976754 +0000 UTC m=+855.611068432"
Jan 21 10:09:08 crc kubenswrapper[5119]: I0121 10:09:08.023011 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Jan 21 10:09:08 crc kubenswrapper[5119]: I0121 10:09:08.025694 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/smart-gateway-operator-1-build" podUID="54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239" containerName="docker-build" containerID="cri-o://870fbe4dd60618e4bb553f9c42f4daf82bfae2ae327446fb3a46929458163c14" gracePeriod=30
Jan 21 10:09:09 crc kubenswrapper[5119]: I0121 10:09:09.667586 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"]
Jan 21 10:09:09 crc kubenswrapper[5119]: I0121 10:09:09.903298 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"]
Jan 21 10:09:09 crc kubenswrapper[5119]: I0121 10:09:09.903496 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 10:09:09 crc kubenswrapper[5119]: I0121 10:09:09.907333 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-sys-config\""
Jan 21 10:09:09 crc kubenswrapper[5119]: I0121 10:09:09.907356 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-global-ca\""
Jan 21 10:09:09 crc kubenswrapper[5119]: I0121 10:09:09.910066 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-ca\""
Jan 21 10:09:09 crc kubenswrapper[5119]: I0121 10:09:09.939196 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/917a0b38-23e4-466d-8e05-434245795a3e-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 10:09:09 crc kubenswrapper[5119]: I0121 10:09:09.939368 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/917a0b38-23e4-466d-8e05-434245795a3e-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 10:09:09 crc kubenswrapper[5119]: I0121 10:09:09.939430 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/917a0b38-23e4-466d-8e05-434245795a3e-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 10:09:09 crc kubenswrapper[5119]: I0121 10:09:09.939526 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/917a0b38-23e4-466d-8e05-434245795a3e-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 10:09:09 crc kubenswrapper[5119]: I0121 10:09:09.939578 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/917a0b38-23e4-466d-8e05-434245795a3e-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 10:09:09 crc kubenswrapper[5119]: I0121 10:09:09.939661 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/917a0b38-23e4-466d-8e05-434245795a3e-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 10:09:09 crc kubenswrapper[5119]: I0121 10:09:09.939731 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/917a0b38-23e4-466d-8e05-434245795a3e-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 10:09:09 crc kubenswrapper[5119]: I0121 10:09:09.939771 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/917a0b38-23e4-466d-8e05-434245795a3e-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 10:09:09 crc kubenswrapper[5119]: I0121 10:09:09.939803 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/917a0b38-23e4-466d-8e05-434245795a3e-builder-dockercfg-llwsp-push\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 10:09:09 crc kubenswrapper[5119]: I0121 10:09:09.939846 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/917a0b38-23e4-466d-8e05-434245795a3e-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 10:09:09 crc kubenswrapper[5119]: I0121 10:09:09.939911 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/917a0b38-23e4-466d-8e05-434245795a3e-builder-dockercfg-llwsp-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 10:09:09 crc kubenswrapper[5119]: I0121 10:09:09.939989 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtxn5\" (UniqueName: \"kubernetes.io/projected/917a0b38-23e4-466d-8e05-434245795a3e-kube-api-access-rtxn5\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 10:09:10 crc kubenswrapper[5119]: I0121 10:09:10.041258 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/917a0b38-23e4-466d-8e05-434245795a3e-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 10:09:10 crc kubenswrapper[5119]: I0121 10:09:10.041318 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/917a0b38-23e4-466d-8e05-434245795a3e-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 10:09:10 crc kubenswrapper[5119]: I0121 10:09:10.041343 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/917a0b38-23e4-466d-8e05-434245795a3e-builder-dockercfg-llwsp-push\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 10:09:10 crc kubenswrapper[5119]: I0121 10:09:10.041370 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/917a0b38-23e4-466d-8e05-434245795a3e-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 10:09:10 crc kubenswrapper[5119]: I0121 10:09:10.041398 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/917a0b38-23e4-466d-8e05-434245795a3e-builder-dockercfg-llwsp-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 10:09:10 crc kubenswrapper[5119]: I0121 10:09:10.041429 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rtxn5\" (UniqueName: \"kubernetes.io/projected/917a0b38-23e4-466d-8e05-434245795a3e-kube-api-access-rtxn5\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 10:09:10 crc kubenswrapper[5119]: I0121 10:09:10.041437 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/917a0b38-23e4-466d-8e05-434245795a3e-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 10:09:10 crc kubenswrapper[5119]: I0121 10:09:10.041617 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/917a0b38-23e4-466d-8e05-434245795a3e-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 10:09:10 crc kubenswrapper[5119]: I0121 10:09:10.041905 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/917a0b38-23e4-466d-8e05-434245795a3e-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 10:09:10 crc kubenswrapper[5119]: I0121 10:09:10.041978 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/917a0b38-23e4-466d-8e05-434245795a3e-buildworkdir\") pod
\"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 10:09:10 crc kubenswrapper[5119]: I0121 10:09:10.042017 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/917a0b38-23e4-466d-8e05-434245795a3e-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 10:09:10 crc kubenswrapper[5119]: I0121 10:09:10.042035 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/917a0b38-23e4-466d-8e05-434245795a3e-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 10:09:10 crc kubenswrapper[5119]: I0121 10:09:10.042070 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/917a0b38-23e4-466d-8e05-434245795a3e-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 10:09:10 crc kubenswrapper[5119]: I0121 10:09:10.042128 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/917a0b38-23e4-466d-8e05-434245795a3e-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 10:09:10 crc kubenswrapper[5119]: I0121 10:09:10.042460 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/917a0b38-23e4-466d-8e05-434245795a3e-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 10:09:10 crc kubenswrapper[5119]: I0121 10:09:10.042555 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/917a0b38-23e4-466d-8e05-434245795a3e-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 10:09:10 crc kubenswrapper[5119]: I0121 10:09:10.042675 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/917a0b38-23e4-466d-8e05-434245795a3e-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 10:09:10 crc kubenswrapper[5119]: I0121 10:09:10.041904 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/917a0b38-23e4-466d-8e05-434245795a3e-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 10:09:10 crc kubenswrapper[5119]: I0121 10:09:10.042757 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/917a0b38-23e4-466d-8e05-434245795a3e-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 10:09:10 crc kubenswrapper[5119]: I0121 10:09:10.042848 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/917a0b38-23e4-466d-8e05-434245795a3e-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 10:09:10 crc kubenswrapper[5119]: I0121 10:09:10.043218 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/917a0b38-23e4-466d-8e05-434245795a3e-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 10:09:10 crc kubenswrapper[5119]: I0121 10:09:10.049194 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/917a0b38-23e4-466d-8e05-434245795a3e-builder-dockercfg-llwsp-push\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 10:09:10 crc kubenswrapper[5119]: I0121 10:09:10.051152 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/917a0b38-23e4-466d-8e05-434245795a3e-builder-dockercfg-llwsp-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 10:09:10 crc kubenswrapper[5119]: I0121 10:09:10.058585 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtxn5\" (UniqueName: \"kubernetes.io/projected/917a0b38-23e4-466d-8e05-434245795a3e-kube-api-access-rtxn5\") pod \"smart-gateway-operator-2-build\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 10:09:10 crc kubenswrapper[5119]: I0121 10:09:10.227524 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 10:09:10 crc kubenswrapper[5119]: I0121 10:09:10.686468 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.016128 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239/docker-build/0.log" Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.017134 5119 generic.go:358] "Generic (PLEG): container finished" podID="54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239" containerID="870fbe4dd60618e4bb553f9c42f4daf82bfae2ae327446fb3a46929458163c14" exitCode=1 Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.017224 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239","Type":"ContainerDied","Data":"870fbe4dd60618e4bb553f9c42f4daf82bfae2ae327446fb3a46929458163c14"} Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.020361 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"917a0b38-23e4-466d-8e05-434245795a3e","Type":"ContainerStarted","Data":"db18563928c5ef72b13ac1e12f760826caffdc2ad05704231eea1a0b32e06e6b"} Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.293889 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239/docker-build/0.log" Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.294270 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.362278 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-build-proxy-ca-bundles\") pod \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.362317 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-container-storage-root\") pod \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.362344 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-buildcachedir\") pod \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.362373 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm8df\" (UniqueName: \"kubernetes.io/projected/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-kube-api-access-cm8df\") pod \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.362388 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-build-ca-bundles\") pod \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.362416 5119 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-builder-dockercfg-llwsp-push\") pod \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.362447 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-build-system-configs\") pod \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.362473 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-buildworkdir\") pod \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.362489 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-node-pullsecrets\") pod \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.362530 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-container-storage-run\") pod \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.362548 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: 
\"kubernetes.io/secret/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-builder-dockercfg-llwsp-pull\") pod \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.362566 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-build-blob-cache\") pod \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\" (UID: \"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239\") " Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.362908 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239" (UID: "54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.362942 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239" (UID: "54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.363379 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239" (UID: "54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.363532 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239" (UID: "54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.363553 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239" (UID: "54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.363618 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239" (UID: "54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.364207 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239" (UID: "54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.365005 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239" (UID: "54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.367993 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-builder-dockercfg-llwsp-push" (OuterVolumeSpecName: "builder-dockercfg-llwsp-push") pod "54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239" (UID: "54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239"). InnerVolumeSpecName "builder-dockercfg-llwsp-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.368158 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-builder-dockercfg-llwsp-pull" (OuterVolumeSpecName: "builder-dockercfg-llwsp-pull") pod "54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239" (UID: "54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239"). InnerVolumeSpecName "builder-dockercfg-llwsp-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.372356 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-kube-api-access-cm8df" (OuterVolumeSpecName: "kube-api-access-cm8df") pod "54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239" (UID: "54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239"). InnerVolumeSpecName "kube-api-access-cm8df". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.463388 5119 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.463417 5119 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.463428 5119 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.463435 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cm8df\" (UniqueName: \"kubernetes.io/projected/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-kube-api-access-cm8df\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.463444 5119 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.463452 5119 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-builder-dockercfg-llwsp-push\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.463459 5119 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-build-system-configs\") on node \"crc\" 
DevicePath \"\"" Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.463467 5119 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.463474 5119 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.463483 5119 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.463490 5119 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-builder-dockercfg-llwsp-pull\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.499644 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239" (UID: "54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:09:11 crc kubenswrapper[5119]: I0121 10:09:11.565053 5119 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:12 crc kubenswrapper[5119]: I0121 10:09:12.027747 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239/docker-build/0.log" Jan 21 10:09:12 crc kubenswrapper[5119]: I0121 10:09:12.028340 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 10:09:12 crc kubenswrapper[5119]: I0121 10:09:12.028361 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239","Type":"ContainerDied","Data":"6844bdd6afc8cc691f7f8bd1e99602f81963b3b463e876bf6fc1d50830c2fa9c"} Jan 21 10:09:12 crc kubenswrapper[5119]: I0121 10:09:12.028422 5119 scope.go:117] "RemoveContainer" containerID="870fbe4dd60618e4bb553f9c42f4daf82bfae2ae327446fb3a46929458163c14" Jan 21 10:09:12 crc kubenswrapper[5119]: I0121 10:09:12.030789 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"917a0b38-23e4-466d-8e05-434245795a3e","Type":"ContainerStarted","Data":"65318e7c15e154677ece5e8f27a240b15aada3928a8ec8a5a5b37d3d7bab8bbb"} Jan 21 10:09:12 crc kubenswrapper[5119]: I0121 10:09:12.090033 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 21 10:09:12 crc kubenswrapper[5119]: I0121 10:09:12.093281 5119 scope.go:117] "RemoveContainer" containerID="2194342fad735e14c96f89d0b98f45673ac2cb495697b43d92fed4dd4f1580b6" Jan 21 10:09:12 crc kubenswrapper[5119]: I0121 
10:09:12.096449 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 21 10:09:12 crc kubenswrapper[5119]: I0121 10:09:12.600046 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239" path="/var/lib/kubelet/pods/54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239/volumes" Jan 21 10:09:13 crc kubenswrapper[5119]: I0121 10:09:13.036650 5119 generic.go:358] "Generic (PLEG): container finished" podID="917a0b38-23e4-466d-8e05-434245795a3e" containerID="65318e7c15e154677ece5e8f27a240b15aada3928a8ec8a5a5b37d3d7bab8bbb" exitCode=0 Jan 21 10:09:13 crc kubenswrapper[5119]: I0121 10:09:13.036729 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"917a0b38-23e4-466d-8e05-434245795a3e","Type":"ContainerDied","Data":"65318e7c15e154677ece5e8f27a240b15aada3928a8ec8a5a5b37d3d7bab8bbb"} Jan 21 10:09:14 crc kubenswrapper[5119]: I0121 10:09:14.047294 5119 generic.go:358] "Generic (PLEG): container finished" podID="917a0b38-23e4-466d-8e05-434245795a3e" containerID="b4425b2eefb7b3bc96a0ff61506cc7b95f6082bfd1f89228db1f6d7376db0e67" exitCode=0 Jan 21 10:09:14 crc kubenswrapper[5119]: I0121 10:09:14.047399 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"917a0b38-23e4-466d-8e05-434245795a3e","Type":"ContainerDied","Data":"b4425b2eefb7b3bc96a0ff61506cc7b95f6082bfd1f89228db1f6d7376db0e67"} Jan 21 10:09:14 crc kubenswrapper[5119]: I0121 10:09:14.080452 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_917a0b38-23e4-466d-8e05-434245795a3e/manage-dockerfile/0.log" Jan 21 10:09:15 crc kubenswrapper[5119]: I0121 10:09:15.057262 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" 
event={"ID":"917a0b38-23e4-466d-8e05-434245795a3e","Type":"ContainerStarted","Data":"a620122b15d295bd42dc124686baaf79d684027d7c37b1b4651a30263ef2753e"} Jan 21 10:09:19 crc kubenswrapper[5119]: I0121 10:09:19.918525 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:09:19 crc kubenswrapper[5119]: I0121 10:09:19.919064 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:09:45 crc kubenswrapper[5119]: I0121 10:09:45.142908 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log" Jan 21 10:09:45 crc kubenswrapper[5119]: I0121 10:09:45.147233 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 21 10:09:45 crc kubenswrapper[5119]: I0121 10:09:45.221730 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log" Jan 21 10:09:45 crc kubenswrapper[5119]: I0121 10:09:45.225065 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 21 10:09:49 crc kubenswrapper[5119]: I0121 10:09:49.919334 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:09:49 crc kubenswrapper[5119]: I0121 10:09:49.919883 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:10:00 crc kubenswrapper[5119]: I0121 10:10:00.132556 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-2-build" podStartSLOduration=51.132535706 podStartE2EDuration="51.132535706s" podCreationTimestamp="2026-01-21 10:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:09:15.082833616 +0000 UTC m=+870.750925304" watchObservedRunningTime="2026-01-21 10:10:00.132535706 +0000 UTC m=+915.800627404" Jan 21 10:10:00 crc kubenswrapper[5119]: I0121 10:10:00.136037 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483170-g2th8"] Jan 21 10:10:00 crc kubenswrapper[5119]: I0121 10:10:00.136889 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239" containerName="docker-build" Jan 21 10:10:00 crc kubenswrapper[5119]: I0121 10:10:00.136914 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239" containerName="docker-build" Jan 21 10:10:00 crc kubenswrapper[5119]: I0121 10:10:00.136945 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239" containerName="manage-dockerfile" Jan 21 10:10:00 crc 
kubenswrapper[5119]: I0121 10:10:00.136956 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239" containerName="manage-dockerfile" Jan 21 10:10:00 crc kubenswrapper[5119]: I0121 10:10:00.137105 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="54b9ab5a-a1f1-4f9b-9bd1-253ff2a72239" containerName="docker-build" Jan 21 10:10:00 crc kubenswrapper[5119]: I0121 10:10:00.146216 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483170-g2th8"] Jan 21 10:10:00 crc kubenswrapper[5119]: I0121 10:10:00.146332 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483170-g2th8" Jan 21 10:10:00 crc kubenswrapper[5119]: I0121 10:10:00.148886 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:10:00 crc kubenswrapper[5119]: I0121 10:10:00.149193 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\"" Jan 21 10:10:00 crc kubenswrapper[5119]: I0121 10:10:00.149756 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:10:00 crc kubenswrapper[5119]: I0121 10:10:00.295852 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgkh4\" (UniqueName: \"kubernetes.io/projected/8ad0ffb1-b9b5-48c1-9cbf-4aaca690c873-kube-api-access-pgkh4\") pod \"auto-csr-approver-29483170-g2th8\" (UID: \"8ad0ffb1-b9b5-48c1-9cbf-4aaca690c873\") " pod="openshift-infra/auto-csr-approver-29483170-g2th8" Jan 21 10:10:00 crc kubenswrapper[5119]: I0121 10:10:00.397723 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pgkh4\" (UniqueName: 
\"kubernetes.io/projected/8ad0ffb1-b9b5-48c1-9cbf-4aaca690c873-kube-api-access-pgkh4\") pod \"auto-csr-approver-29483170-g2th8\" (UID: \"8ad0ffb1-b9b5-48c1-9cbf-4aaca690c873\") " pod="openshift-infra/auto-csr-approver-29483170-g2th8" Jan 21 10:10:00 crc kubenswrapper[5119]: I0121 10:10:00.423270 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgkh4\" (UniqueName: \"kubernetes.io/projected/8ad0ffb1-b9b5-48c1-9cbf-4aaca690c873-kube-api-access-pgkh4\") pod \"auto-csr-approver-29483170-g2th8\" (UID: \"8ad0ffb1-b9b5-48c1-9cbf-4aaca690c873\") " pod="openshift-infra/auto-csr-approver-29483170-g2th8" Jan 21 10:10:00 crc kubenswrapper[5119]: I0121 10:10:00.471044 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483170-g2th8" Jan 21 10:10:00 crc kubenswrapper[5119]: I0121 10:10:00.674202 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483170-g2th8"] Jan 21 10:10:00 crc kubenswrapper[5119]: I0121 10:10:00.681646 5119 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 10:10:01 crc kubenswrapper[5119]: I0121 10:10:01.344489 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483170-g2th8" event={"ID":"8ad0ffb1-b9b5-48c1-9cbf-4aaca690c873","Type":"ContainerStarted","Data":"915e752e050c3b267aa1e9c358f670e8ac06e2c6abd933691604d50bab7a52d1"} Jan 21 10:10:04 crc kubenswrapper[5119]: I0121 10:10:04.378989 5119 generic.go:358] "Generic (PLEG): container finished" podID="8ad0ffb1-b9b5-48c1-9cbf-4aaca690c873" containerID="e5a92ef9fec23f592cd3391f23b5319cd41d024ae5accd50aabc2c556ddcb006" exitCode=0 Jan 21 10:10:04 crc kubenswrapper[5119]: I0121 10:10:04.379048 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483170-g2th8" 
event={"ID":"8ad0ffb1-b9b5-48c1-9cbf-4aaca690c873","Type":"ContainerDied","Data":"e5a92ef9fec23f592cd3391f23b5319cd41d024ae5accd50aabc2c556ddcb006"} Jan 21 10:10:05 crc kubenswrapper[5119]: I0121 10:10:05.705726 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483170-g2th8" Jan 21 10:10:05 crc kubenswrapper[5119]: I0121 10:10:05.871091 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgkh4\" (UniqueName: \"kubernetes.io/projected/8ad0ffb1-b9b5-48c1-9cbf-4aaca690c873-kube-api-access-pgkh4\") pod \"8ad0ffb1-b9b5-48c1-9cbf-4aaca690c873\" (UID: \"8ad0ffb1-b9b5-48c1-9cbf-4aaca690c873\") " Jan 21 10:10:05 crc kubenswrapper[5119]: I0121 10:10:05.876474 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ad0ffb1-b9b5-48c1-9cbf-4aaca690c873-kube-api-access-pgkh4" (OuterVolumeSpecName: "kube-api-access-pgkh4") pod "8ad0ffb1-b9b5-48c1-9cbf-4aaca690c873" (UID: "8ad0ffb1-b9b5-48c1-9cbf-4aaca690c873"). InnerVolumeSpecName "kube-api-access-pgkh4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:10:05 crc kubenswrapper[5119]: I0121 10:10:05.972586 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgkh4\" (UniqueName: \"kubernetes.io/projected/8ad0ffb1-b9b5-48c1-9cbf-4aaca690c873-kube-api-access-pgkh4\") on node \"crc\" DevicePath \"\"" Jan 21 10:10:06 crc kubenswrapper[5119]: I0121 10:10:06.392642 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483170-g2th8" event={"ID":"8ad0ffb1-b9b5-48c1-9cbf-4aaca690c873","Type":"ContainerDied","Data":"915e752e050c3b267aa1e9c358f670e8ac06e2c6abd933691604d50bab7a52d1"} Jan 21 10:10:06 crc kubenswrapper[5119]: I0121 10:10:06.392685 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="915e752e050c3b267aa1e9c358f670e8ac06e2c6abd933691604d50bab7a52d1" Jan 21 10:10:06 crc kubenswrapper[5119]: I0121 10:10:06.392701 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483170-g2th8" Jan 21 10:10:06 crc kubenswrapper[5119]: I0121 10:10:06.764354 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483164-2hfvk"] Jan 21 10:10:06 crc kubenswrapper[5119]: I0121 10:10:06.769593 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483164-2hfvk"] Jan 21 10:10:08 crc kubenswrapper[5119]: I0121 10:10:08.598031 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acfe51cb-4322-420b-bbbb-de502ae4c2f6" path="/var/lib/kubelet/pods/acfe51cb-4322-420b-bbbb-de502ae4c2f6/volumes" Jan 21 10:10:19 crc kubenswrapper[5119]: I0121 10:10:19.489217 5119 generic.go:358] "Generic (PLEG): container finished" podID="917a0b38-23e4-466d-8e05-434245795a3e" containerID="a620122b15d295bd42dc124686baaf79d684027d7c37b1b4651a30263ef2753e" exitCode=0 Jan 21 10:10:19 crc kubenswrapper[5119]: I0121 10:10:19.489472 5119 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"917a0b38-23e4-466d-8e05-434245795a3e","Type":"ContainerDied","Data":"a620122b15d295bd42dc124686baaf79d684027d7c37b1b4651a30263ef2753e"} Jan 21 10:10:19 crc kubenswrapper[5119]: I0121 10:10:19.919347 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:10:19 crc kubenswrapper[5119]: I0121 10:10:19.919494 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:10:19 crc kubenswrapper[5119]: I0121 10:10:19.919574 5119 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" Jan 21 10:10:19 crc kubenswrapper[5119]: I0121 10:10:19.920719 5119 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"90484e72f46c3fbcf88c1033b1658a1d68108d21cb7c6bed596a53764123001b"} pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 10:10:19 crc kubenswrapper[5119]: I0121 10:10:19.920836 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" 
containerID="cri-o://90484e72f46c3fbcf88c1033b1658a1d68108d21cb7c6bed596a53764123001b" gracePeriod=600 Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.499201 5119 generic.go:358] "Generic (PLEG): container finished" podID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerID="90484e72f46c3fbcf88c1033b1658a1d68108d21cb7c6bed596a53764123001b" exitCode=0 Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.499291 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerDied","Data":"90484e72f46c3fbcf88c1033b1658a1d68108d21cb7c6bed596a53764123001b"} Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.499586 5119 scope.go:117] "RemoveContainer" containerID="0fcd4683c1f89fdf153473f37b8eee4ecff9e78a85c4e6b63e9902ab31d6f4a8" Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.739335 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.784319 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/917a0b38-23e4-466d-8e05-434245795a3e-buildworkdir\") pod \"917a0b38-23e4-466d-8e05-434245795a3e\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.784688 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/917a0b38-23e4-466d-8e05-434245795a3e-buildcachedir\") pod \"917a0b38-23e4-466d-8e05-434245795a3e\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.784821 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/917a0b38-23e4-466d-8e05-434245795a3e-container-storage-run\") pod \"917a0b38-23e4-466d-8e05-434245795a3e\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.784919 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtxn5\" (UniqueName: \"kubernetes.io/projected/917a0b38-23e4-466d-8e05-434245795a3e-kube-api-access-rtxn5\") pod \"917a0b38-23e4-466d-8e05-434245795a3e\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.784813 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/917a0b38-23e4-466d-8e05-434245795a3e-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "917a0b38-23e4-466d-8e05-434245795a3e" (UID: "917a0b38-23e4-466d-8e05-434245795a3e"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.785047 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/917a0b38-23e4-466d-8e05-434245795a3e-build-system-configs\") pod \"917a0b38-23e4-466d-8e05-434245795a3e\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.785147 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/917a0b38-23e4-466d-8e05-434245795a3e-builder-dockercfg-llwsp-push\") pod \"917a0b38-23e4-466d-8e05-434245795a3e\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.785239 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/917a0b38-23e4-466d-8e05-434245795a3e-container-storage-root\") pod \"917a0b38-23e4-466d-8e05-434245795a3e\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.785389 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/917a0b38-23e4-466d-8e05-434245795a3e-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "917a0b38-23e4-466d-8e05-434245795a3e" (UID: "917a0b38-23e4-466d-8e05-434245795a3e"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.785412 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/917a0b38-23e4-466d-8e05-434245795a3e-build-proxy-ca-bundles\") pod \"917a0b38-23e4-466d-8e05-434245795a3e\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.785485 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/917a0b38-23e4-466d-8e05-434245795a3e-build-blob-cache\") pod \"917a0b38-23e4-466d-8e05-434245795a3e\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.785550 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/917a0b38-23e4-466d-8e05-434245795a3e-node-pullsecrets\") pod \"917a0b38-23e4-466d-8e05-434245795a3e\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.785722 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/917a0b38-23e4-466d-8e05-434245795a3e-build-ca-bundles\") pod 
\"917a0b38-23e4-466d-8e05-434245795a3e\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.785764 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/917a0b38-23e4-466d-8e05-434245795a3e-builder-dockercfg-llwsp-pull\") pod \"917a0b38-23e4-466d-8e05-434245795a3e\" (UID: \"917a0b38-23e4-466d-8e05-434245795a3e\") " Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.785794 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/917a0b38-23e4-466d-8e05-434245795a3e-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "917a0b38-23e4-466d-8e05-434245795a3e" (UID: "917a0b38-23e4-466d-8e05-434245795a3e"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.786085 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/917a0b38-23e4-466d-8e05-434245795a3e-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "917a0b38-23e4-466d-8e05-434245795a3e" (UID: "917a0b38-23e4-466d-8e05-434245795a3e"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.786303 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/917a0b38-23e4-466d-8e05-434245795a3e-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "917a0b38-23e4-466d-8e05-434245795a3e" (UID: "917a0b38-23e4-466d-8e05-434245795a3e"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.786389 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/917a0b38-23e4-466d-8e05-434245795a3e-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "917a0b38-23e4-466d-8e05-434245795a3e" (UID: "917a0b38-23e4-466d-8e05-434245795a3e"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.786450 5119 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/917a0b38-23e4-466d-8e05-434245795a3e-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.786476 5119 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/917a0b38-23e4-466d-8e05-434245795a3e-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.786491 5119 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/917a0b38-23e4-466d-8e05-434245795a3e-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.786502 5119 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/917a0b38-23e4-466d-8e05-434245795a3e-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.790922 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/917a0b38-23e4-466d-8e05-434245795a3e-builder-dockercfg-llwsp-pull" (OuterVolumeSpecName: "builder-dockercfg-llwsp-pull") pod "917a0b38-23e4-466d-8e05-434245795a3e" (UID: "917a0b38-23e4-466d-8e05-434245795a3e"). 
InnerVolumeSpecName "builder-dockercfg-llwsp-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.793024 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/917a0b38-23e4-466d-8e05-434245795a3e-builder-dockercfg-llwsp-push" (OuterVolumeSpecName: "builder-dockercfg-llwsp-push") pod "917a0b38-23e4-466d-8e05-434245795a3e" (UID: "917a0b38-23e4-466d-8e05-434245795a3e"). InnerVolumeSpecName "builder-dockercfg-llwsp-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.794256 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/917a0b38-23e4-466d-8e05-434245795a3e-kube-api-access-rtxn5" (OuterVolumeSpecName: "kube-api-access-rtxn5") pod "917a0b38-23e4-466d-8e05-434245795a3e" (UID: "917a0b38-23e4-466d-8e05-434245795a3e"). InnerVolumeSpecName "kube-api-access-rtxn5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.815134 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/917a0b38-23e4-466d-8e05-434245795a3e-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "917a0b38-23e4-466d-8e05-434245795a3e" (UID: "917a0b38-23e4-466d-8e05-434245795a3e"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.888405 5119 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/917a0b38-23e4-466d-8e05-434245795a3e-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.888468 5119 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/917a0b38-23e4-466d-8e05-434245795a3e-builder-dockercfg-llwsp-pull\") on node \"crc\" DevicePath \"\"" Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.888489 5119 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/917a0b38-23e4-466d-8e05-434245795a3e-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.888504 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rtxn5\" (UniqueName: \"kubernetes.io/projected/917a0b38-23e4-466d-8e05-434245795a3e-kube-api-access-rtxn5\") on node \"crc\" DevicePath \"\"" Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.888522 5119 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/917a0b38-23e4-466d-8e05-434245795a3e-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 21 10:10:20 crc kubenswrapper[5119]: I0121 10:10:20.888537 5119 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/917a0b38-23e4-466d-8e05-434245795a3e-builder-dockercfg-llwsp-push\") on node \"crc\" DevicePath \"\"" Jan 21 10:10:21 crc kubenswrapper[5119]: I0121 10:10:21.005859 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/917a0b38-23e4-466d-8e05-434245795a3e-build-blob-cache" (OuterVolumeSpecName: 
"build-blob-cache") pod "917a0b38-23e4-466d-8e05-434245795a3e" (UID: "917a0b38-23e4-466d-8e05-434245795a3e"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:10:21 crc kubenswrapper[5119]: I0121 10:10:21.090519 5119 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/917a0b38-23e4-466d-8e05-434245795a3e-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 21 10:10:21 crc kubenswrapper[5119]: I0121 10:10:21.528678 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerStarted","Data":"639f312973ca7ed8a8e84c76403e2b53399a57adb4ec6e14a566fd4af25f3c2b"} Jan 21 10:10:21 crc kubenswrapper[5119]: I0121 10:10:21.543480 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"917a0b38-23e4-466d-8e05-434245795a3e","Type":"ContainerDied","Data":"db18563928c5ef72b13ac1e12f760826caffdc2ad05704231eea1a0b32e06e6b"} Jan 21 10:10:21 crc kubenswrapper[5119]: I0121 10:10:21.543587 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db18563928c5ef72b13ac1e12f760826caffdc2ad05704231eea1a0b32e06e6b" Jan 21 10:10:21 crc kubenswrapper[5119]: I0121 10:10:21.543702 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 10:10:22 crc kubenswrapper[5119]: I0121 10:10:22.562700 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/917a0b38-23e4-466d-8e05-434245795a3e-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "917a0b38-23e4-466d-8e05-434245795a3e" (UID: "917a0b38-23e4-466d-8e05-434245795a3e"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:10:22 crc kubenswrapper[5119]: I0121 10:10:22.613469 5119 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/917a0b38-23e4-466d-8e05-434245795a3e-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.538214 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.539297 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="917a0b38-23e4-466d-8e05-434245795a3e" containerName="git-clone" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.539309 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="917a0b38-23e4-466d-8e05-434245795a3e" containerName="git-clone" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.539320 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="917a0b38-23e4-466d-8e05-434245795a3e" containerName="manage-dockerfile" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.539328 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="917a0b38-23e4-466d-8e05-434245795a3e" containerName="manage-dockerfile" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.539339 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8ad0ffb1-b9b5-48c1-9cbf-4aaca690c873" containerName="oc" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.539345 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ad0ffb1-b9b5-48c1-9cbf-4aaca690c873" containerName="oc" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.539372 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="917a0b38-23e4-466d-8e05-434245795a3e" containerName="docker-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.539377 5119 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="917a0b38-23e4-466d-8e05-434245795a3e" containerName="docker-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.539462 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="8ad0ffb1-b9b5-48c1-9cbf-4aaca690c873" containerName="oc" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.539471 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="917a0b38-23e4-466d-8e05-434245795a3e" containerName="docker-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.609990 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.610377 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.613740 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-ca\"" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.613766 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-sys-config\"" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.613813 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-global-ca\"" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.613828 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-llwsp\"" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.753626 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/ed72636d-17c7-4a75-a4c2-ee3c07e64541-builder-dockercfg-llwsp-push\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " 
pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.753687 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/ed72636d-17c7-4a75-a4c2-ee3c07e64541-builder-dockercfg-llwsp-pull\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.753705 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ed72636d-17c7-4a75-a4c2-ee3c07e64541-buildworkdir\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.753723 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ed72636d-17c7-4a75-a4c2-ee3c07e64541-build-system-configs\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.753750 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ed72636d-17c7-4a75-a4c2-ee3c07e64541-container-storage-run\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.754009 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ed72636d-17c7-4a75-a4c2-ee3c07e64541-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") 
" pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.754085 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ed72636d-17c7-4a75-a4c2-ee3c07e64541-buildcachedir\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.754128 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed72636d-17c7-4a75-a4c2-ee3c07e64541-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.754216 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ed72636d-17c7-4a75-a4c2-ee3c07e64541-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.754279 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed72636d-17c7-4a75-a4c2-ee3c07e64541-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.754361 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmgnd\" (UniqueName: \"kubernetes.io/projected/ed72636d-17c7-4a75-a4c2-ee3c07e64541-kube-api-access-cmgnd\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " 
pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.754426 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ed72636d-17c7-4a75-a4c2-ee3c07e64541-container-storage-root\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.855372 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ed72636d-17c7-4a75-a4c2-ee3c07e64541-buildcachedir\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.855436 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed72636d-17c7-4a75-a4c2-ee3c07e64541-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.855461 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ed72636d-17c7-4a75-a4c2-ee3c07e64541-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.855478 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed72636d-17c7-4a75-a4c2-ee3c07e64541-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 
10:10:25.855505 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cmgnd\" (UniqueName: \"kubernetes.io/projected/ed72636d-17c7-4a75-a4c2-ee3c07e64541-kube-api-access-cmgnd\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.855522 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ed72636d-17c7-4a75-a4c2-ee3c07e64541-container-storage-root\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.855525 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ed72636d-17c7-4a75-a4c2-ee3c07e64541-buildcachedir\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.855568 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/ed72636d-17c7-4a75-a4c2-ee3c07e64541-builder-dockercfg-llwsp-push\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.855591 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/ed72636d-17c7-4a75-a4c2-ee3c07e64541-builder-dockercfg-llwsp-pull\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.855763 5119 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ed72636d-17c7-4a75-a4c2-ee3c07e64541-buildworkdir\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.855821 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ed72636d-17c7-4a75-a4c2-ee3c07e64541-build-system-configs\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.855867 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ed72636d-17c7-4a75-a4c2-ee3c07e64541-container-storage-run\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.855994 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ed72636d-17c7-4a75-a4c2-ee3c07e64541-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.856292 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ed72636d-17c7-4a75-a4c2-ee3c07e64541-container-storage-root\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.856420 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/ed72636d-17c7-4a75-a4c2-ee3c07e64541-buildworkdir\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.856466 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed72636d-17c7-4a75-a4c2-ee3c07e64541-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.856592 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ed72636d-17c7-4a75-a4c2-ee3c07e64541-build-system-configs\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.856826 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed72636d-17c7-4a75-a4c2-ee3c07e64541-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.856878 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ed72636d-17c7-4a75-a4c2-ee3c07e64541-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.856985 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ed72636d-17c7-4a75-a4c2-ee3c07e64541-container-storage-run\") pod \"sg-core-1-build\" (UID: 
\"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.857125 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ed72636d-17c7-4a75-a4c2-ee3c07e64541-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.860980 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/ed72636d-17c7-4a75-a4c2-ee3c07e64541-builder-dockercfg-llwsp-pull\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.861031 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/ed72636d-17c7-4a75-a4c2-ee3c07e64541-builder-dockercfg-llwsp-push\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.878079 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmgnd\" (UniqueName: \"kubernetes.io/projected/ed72636d-17c7-4a75-a4c2-ee3c07e64541-kube-api-access-cmgnd\") pod \"sg-core-1-build\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " pod="service-telemetry/sg-core-1-build" Jan 21 10:10:25 crc kubenswrapper[5119]: I0121 10:10:25.929075 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 21 10:10:26 crc kubenswrapper[5119]: I0121 10:10:26.165192 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 21 10:10:26 crc kubenswrapper[5119]: W0121 10:10:26.166916 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded72636d_17c7_4a75_a4c2_ee3c07e64541.slice/crio-ee7baa34eaa257c1aafd77dde738cc682012844ec58ddba051b61de09c444820 WatchSource:0}: Error finding container ee7baa34eaa257c1aafd77dde738cc682012844ec58ddba051b61de09c444820: Status 404 returned error can't find the container with id ee7baa34eaa257c1aafd77dde738cc682012844ec58ddba051b61de09c444820 Jan 21 10:10:26 crc kubenswrapper[5119]: I0121 10:10:26.574891 5119 generic.go:358] "Generic (PLEG): container finished" podID="ed72636d-17c7-4a75-a4c2-ee3c07e64541" containerID="58117b41ab054a87be22906101f03cfd9032fcb9854317c813e1f59b0a5fba58" exitCode=0 Jan 21 10:10:26 crc kubenswrapper[5119]: I0121 10:10:26.575033 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"ed72636d-17c7-4a75-a4c2-ee3c07e64541","Type":"ContainerDied","Data":"58117b41ab054a87be22906101f03cfd9032fcb9854317c813e1f59b0a5fba58"} Jan 21 10:10:26 crc kubenswrapper[5119]: I0121 10:10:26.575082 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"ed72636d-17c7-4a75-a4c2-ee3c07e64541","Type":"ContainerStarted","Data":"ee7baa34eaa257c1aafd77dde738cc682012844ec58ddba051b61de09c444820"} Jan 21 10:10:27 crc kubenswrapper[5119]: I0121 10:10:27.586028 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"ed72636d-17c7-4a75-a4c2-ee3c07e64541","Type":"ContainerStarted","Data":"1f8f0242a7f57211a4e64a1f9f73d347ae5712e3aa10e579e2bd6612b273cd1f"} Jan 21 10:10:27 crc kubenswrapper[5119]: I0121 
10:10:27.611871 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-1-build" podStartSLOduration=2.6118464169999998 podStartE2EDuration="2.611846417s" podCreationTimestamp="2026-01-21 10:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:10:27.602922304 +0000 UTC m=+943.271013982" watchObservedRunningTime="2026-01-21 10:10:27.611846417 +0000 UTC m=+943.279938135" Jan 21 10:10:35 crc kubenswrapper[5119]: I0121 10:10:35.910519 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 21 10:10:35 crc kubenswrapper[5119]: I0121 10:10:35.911344 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/sg-core-1-build" podUID="ed72636d-17c7-4a75-a4c2-ee3c07e64541" containerName="docker-build" containerID="cri-o://1f8f0242a7f57211a4e64a1f9f73d347ae5712e3aa10e579e2bd6612b273cd1f" gracePeriod=30 Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.328029 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_ed72636d-17c7-4a75-a4c2-ee3c07e64541/docker-build/0.log" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.328750 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.498251 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ed72636d-17c7-4a75-a4c2-ee3c07e64541-container-storage-run\") pod \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.498298 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ed72636d-17c7-4a75-a4c2-ee3c07e64541-node-pullsecrets\") pod \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.498339 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ed72636d-17c7-4a75-a4c2-ee3c07e64541-buildworkdir\") pod \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.498395 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmgnd\" (UniqueName: \"kubernetes.io/projected/ed72636d-17c7-4a75-a4c2-ee3c07e64541-kube-api-access-cmgnd\") pod \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.498486 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ed72636d-17c7-4a75-a4c2-ee3c07e64541-build-system-configs\") pod \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.498512 5119 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/ed72636d-17c7-4a75-a4c2-ee3c07e64541-builder-dockercfg-llwsp-pull\") pod \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.498531 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed72636d-17c7-4a75-a4c2-ee3c07e64541-build-proxy-ca-bundles\") pod \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.498564 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed72636d-17c7-4a75-a4c2-ee3c07e64541-build-ca-bundles\") pod \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.498592 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ed72636d-17c7-4a75-a4c2-ee3c07e64541-container-storage-root\") pod \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.498680 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ed72636d-17c7-4a75-a4c2-ee3c07e64541-buildcachedir\") pod \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.498723 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ed72636d-17c7-4a75-a4c2-ee3c07e64541-build-blob-cache\") 
pod \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.498779 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/ed72636d-17c7-4a75-a4c2-ee3c07e64541-builder-dockercfg-llwsp-push\") pod \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\" (UID: \"ed72636d-17c7-4a75-a4c2-ee3c07e64541\") " Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.499444 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed72636d-17c7-4a75-a4c2-ee3c07e64541-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "ed72636d-17c7-4a75-a4c2-ee3c07e64541" (UID: "ed72636d-17c7-4a75-a4c2-ee3c07e64541"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.499543 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed72636d-17c7-4a75-a4c2-ee3c07e64541-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "ed72636d-17c7-4a75-a4c2-ee3c07e64541" (UID: "ed72636d-17c7-4a75-a4c2-ee3c07e64541"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.500098 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed72636d-17c7-4a75-a4c2-ee3c07e64541-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "ed72636d-17c7-4a75-a4c2-ee3c07e64541" (UID: "ed72636d-17c7-4a75-a4c2-ee3c07e64541"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.500455 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed72636d-17c7-4a75-a4c2-ee3c07e64541-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "ed72636d-17c7-4a75-a4c2-ee3c07e64541" (UID: "ed72636d-17c7-4a75-a4c2-ee3c07e64541"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.500510 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed72636d-17c7-4a75-a4c2-ee3c07e64541-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "ed72636d-17c7-4a75-a4c2-ee3c07e64541" (UID: "ed72636d-17c7-4a75-a4c2-ee3c07e64541"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.500557 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed72636d-17c7-4a75-a4c2-ee3c07e64541-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "ed72636d-17c7-4a75-a4c2-ee3c07e64541" (UID: "ed72636d-17c7-4a75-a4c2-ee3c07e64541"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.500589 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed72636d-17c7-4a75-a4c2-ee3c07e64541-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "ed72636d-17c7-4a75-a4c2-ee3c07e64541" (UID: "ed72636d-17c7-4a75-a4c2-ee3c07e64541"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.504301 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed72636d-17c7-4a75-a4c2-ee3c07e64541-builder-dockercfg-llwsp-push" (OuterVolumeSpecName: "builder-dockercfg-llwsp-push") pod "ed72636d-17c7-4a75-a4c2-ee3c07e64541" (UID: "ed72636d-17c7-4a75-a4c2-ee3c07e64541"). InnerVolumeSpecName "builder-dockercfg-llwsp-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.504633 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed72636d-17c7-4a75-a4c2-ee3c07e64541-kube-api-access-cmgnd" (OuterVolumeSpecName: "kube-api-access-cmgnd") pod "ed72636d-17c7-4a75-a4c2-ee3c07e64541" (UID: "ed72636d-17c7-4a75-a4c2-ee3c07e64541"). InnerVolumeSpecName "kube-api-access-cmgnd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.504687 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed72636d-17c7-4a75-a4c2-ee3c07e64541-builder-dockercfg-llwsp-pull" (OuterVolumeSpecName: "builder-dockercfg-llwsp-pull") pod "ed72636d-17c7-4a75-a4c2-ee3c07e64541" (UID: "ed72636d-17c7-4a75-a4c2-ee3c07e64541"). InnerVolumeSpecName "builder-dockercfg-llwsp-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.599861 5119 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/ed72636d-17c7-4a75-a4c2-ee3c07e64541-builder-dockercfg-llwsp-push\") on node \"crc\" DevicePath \"\"" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.599885 5119 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ed72636d-17c7-4a75-a4c2-ee3c07e64541-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.599894 5119 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ed72636d-17c7-4a75-a4c2-ee3c07e64541-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.599903 5119 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ed72636d-17c7-4a75-a4c2-ee3c07e64541-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.599913 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cmgnd\" (UniqueName: \"kubernetes.io/projected/ed72636d-17c7-4a75-a4c2-ee3c07e64541-kube-api-access-cmgnd\") on node \"crc\" DevicePath \"\"" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.599923 5119 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ed72636d-17c7-4a75-a4c2-ee3c07e64541-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.599930 5119 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/ed72636d-17c7-4a75-a4c2-ee3c07e64541-builder-dockercfg-llwsp-pull\") on node 
\"crc\" DevicePath \"\"" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.599938 5119 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed72636d-17c7-4a75-a4c2-ee3c07e64541-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.599946 5119 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed72636d-17c7-4a75-a4c2-ee3c07e64541-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.599954 5119 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ed72636d-17c7-4a75-a4c2-ee3c07e64541-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.604581 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed72636d-17c7-4a75-a4c2-ee3c07e64541-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "ed72636d-17c7-4a75-a4c2-ee3c07e64541" (UID: "ed72636d-17c7-4a75-a4c2-ee3c07e64541"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.650381 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_ed72636d-17c7-4a75-a4c2-ee3c07e64541/docker-build/0.log" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.650708 5119 generic.go:358] "Generic (PLEG): container finished" podID="ed72636d-17c7-4a75-a4c2-ee3c07e64541" containerID="1f8f0242a7f57211a4e64a1f9f73d347ae5712e3aa10e579e2bd6612b273cd1f" exitCode=1 Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.650987 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"ed72636d-17c7-4a75-a4c2-ee3c07e64541","Type":"ContainerDied","Data":"1f8f0242a7f57211a4e64a1f9f73d347ae5712e3aa10e579e2bd6612b273cd1f"} Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.651026 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"ed72636d-17c7-4a75-a4c2-ee3c07e64541","Type":"ContainerDied","Data":"ee7baa34eaa257c1aafd77dde738cc682012844ec58ddba051b61de09c444820"} Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.651045 5119 scope.go:117] "RemoveContainer" containerID="1f8f0242a7f57211a4e64a1f9f73d347ae5712e3aa10e579e2bd6612b273cd1f" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.651175 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.701061 5119 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ed72636d-17c7-4a75-a4c2-ee3c07e64541-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.710721 5119 scope.go:117] "RemoveContainer" containerID="58117b41ab054a87be22906101f03cfd9032fcb9854317c813e1f59b0a5fba58" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.779209 5119 scope.go:117] "RemoveContainer" containerID="1f8f0242a7f57211a4e64a1f9f73d347ae5712e3aa10e579e2bd6612b273cd1f" Jan 21 10:10:36 crc kubenswrapper[5119]: E0121 10:10:36.779908 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f8f0242a7f57211a4e64a1f9f73d347ae5712e3aa10e579e2bd6612b273cd1f\": container with ID starting with 1f8f0242a7f57211a4e64a1f9f73d347ae5712e3aa10e579e2bd6612b273cd1f not found: ID does not exist" containerID="1f8f0242a7f57211a4e64a1f9f73d347ae5712e3aa10e579e2bd6612b273cd1f" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.779951 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f8f0242a7f57211a4e64a1f9f73d347ae5712e3aa10e579e2bd6612b273cd1f"} err="failed to get container status \"1f8f0242a7f57211a4e64a1f9f73d347ae5712e3aa10e579e2bd6612b273cd1f\": rpc error: code = NotFound desc = could not find container \"1f8f0242a7f57211a4e64a1f9f73d347ae5712e3aa10e579e2bd6612b273cd1f\": container with ID starting with 1f8f0242a7f57211a4e64a1f9f73d347ae5712e3aa10e579e2bd6612b273cd1f not found: ID does not exist" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.779980 5119 scope.go:117] "RemoveContainer" containerID="58117b41ab054a87be22906101f03cfd9032fcb9854317c813e1f59b0a5fba58" Jan 21 10:10:36 crc kubenswrapper[5119]: E0121 10:10:36.780311 5119 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58117b41ab054a87be22906101f03cfd9032fcb9854317c813e1f59b0a5fba58\": container with ID starting with 58117b41ab054a87be22906101f03cfd9032fcb9854317c813e1f59b0a5fba58 not found: ID does not exist" containerID="58117b41ab054a87be22906101f03cfd9032fcb9854317c813e1f59b0a5fba58" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.780349 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58117b41ab054a87be22906101f03cfd9032fcb9854317c813e1f59b0a5fba58"} err="failed to get container status \"58117b41ab054a87be22906101f03cfd9032fcb9854317c813e1f59b0a5fba58\": rpc error: code = NotFound desc = could not find container \"58117b41ab054a87be22906101f03cfd9032fcb9854317c813e1f59b0a5fba58\": container with ID starting with 58117b41ab054a87be22906101f03cfd9032fcb9854317c813e1f59b0a5fba58 not found: ID does not exist" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.803313 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed72636d-17c7-4a75-a4c2-ee3c07e64541-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "ed72636d-17c7-4a75-a4c2-ee3c07e64541" (UID: "ed72636d-17c7-4a75-a4c2-ee3c07e64541"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.903198 5119 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ed72636d-17c7-4a75-a4c2-ee3c07e64541-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.985455 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 21 10:10:36 crc kubenswrapper[5119]: I0121 10:10:36.992226 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.605004 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.606389 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ed72636d-17c7-4a75-a4c2-ee3c07e64541" containerName="manage-dockerfile" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.606527 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed72636d-17c7-4a75-a4c2-ee3c07e64541" containerName="manage-dockerfile" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.606680 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ed72636d-17c7-4a75-a4c2-ee3c07e64541" containerName="docker-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.606778 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed72636d-17c7-4a75-a4c2-ee3c07e64541" containerName="docker-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.607034 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="ed72636d-17c7-4a75-a4c2-ee3c07e64541" containerName="docker-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.634459 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 21 
10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.634716 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.639379 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-global-ca\"" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.639848 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-llwsp\"" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.640158 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-ca\"" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.640391 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-sys-config\"" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.714009 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d31f303f-6cf4-4177-904a-97d7409af8e3-container-storage-run\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.714067 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d31f303f-6cf4-4177-904a-97d7409af8e3-buildcachedir\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.714130 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/d31f303f-6cf4-4177-904a-97d7409af8e3-build-system-configs\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.714220 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d31f303f-6cf4-4177-904a-97d7409af8e3-buildworkdir\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.714244 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/d31f303f-6cf4-4177-904a-97d7409af8e3-builder-dockercfg-llwsp-pull\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.714282 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d31f303f-6cf4-4177-904a-97d7409af8e3-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.714317 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/d31f303f-6cf4-4177-904a-97d7409af8e3-builder-dockercfg-llwsp-push\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.714483 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d31f303f-6cf4-4177-904a-97d7409af8e3-container-storage-root\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.714529 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d31f303f-6cf4-4177-904a-97d7409af8e3-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.714556 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d31f303f-6cf4-4177-904a-97d7409af8e3-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.714638 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vtk6\" (UniqueName: \"kubernetes.io/projected/d31f303f-6cf4-4177-904a-97d7409af8e3-kube-api-access-7vtk6\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.714675 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d31f303f-6cf4-4177-904a-97d7409af8e3-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.816235 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d31f303f-6cf4-4177-904a-97d7409af8e3-container-storage-run\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.816354 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d31f303f-6cf4-4177-904a-97d7409af8e3-buildcachedir\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.816392 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d31f303f-6cf4-4177-904a-97d7409af8e3-build-system-configs\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.816586 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d31f303f-6cf4-4177-904a-97d7409af8e3-buildcachedir\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.816785 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d31f303f-6cf4-4177-904a-97d7409af8e3-buildworkdir\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.816851 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d31f303f-6cf4-4177-904a-97d7409af8e3-container-storage-run\") pod 
\"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.816893 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/d31f303f-6cf4-4177-904a-97d7409af8e3-builder-dockercfg-llwsp-pull\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.816959 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d31f303f-6cf4-4177-904a-97d7409af8e3-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.817011 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/d31f303f-6cf4-4177-904a-97d7409af8e3-builder-dockercfg-llwsp-push\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.817129 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d31f303f-6cf4-4177-904a-97d7409af8e3-container-storage-root\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.817159 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d31f303f-6cf4-4177-904a-97d7409af8e3-buildworkdir\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " 
pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.817192 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d31f303f-6cf4-4177-904a-97d7409af8e3-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.817263 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d31f303f-6cf4-4177-904a-97d7409af8e3-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.817332 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d31f303f-6cf4-4177-904a-97d7409af8e3-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.817442 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7vtk6\" (UniqueName: \"kubernetes.io/projected/d31f303f-6cf4-4177-904a-97d7409af8e3-kube-api-access-7vtk6\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.817512 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d31f303f-6cf4-4177-904a-97d7409af8e3-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.818058 5119 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d31f303f-6cf4-4177-904a-97d7409af8e3-container-storage-root\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.818295 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d31f303f-6cf4-4177-904a-97d7409af8e3-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.818546 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d31f303f-6cf4-4177-904a-97d7409af8e3-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.818574 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d31f303f-6cf4-4177-904a-97d7409af8e3-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.819055 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d31f303f-6cf4-4177-904a-97d7409af8e3-build-system-configs\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.827335 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-llwsp-push\" (UniqueName: 
\"kubernetes.io/secret/d31f303f-6cf4-4177-904a-97d7409af8e3-builder-dockercfg-llwsp-push\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.828215 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/d31f303f-6cf4-4177-904a-97d7409af8e3-builder-dockercfg-llwsp-pull\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.847783 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vtk6\" (UniqueName: \"kubernetes.io/projected/d31f303f-6cf4-4177-904a-97d7409af8e3-kube-api-access-7vtk6\") pod \"sg-core-2-build\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " pod="service-telemetry/sg-core-2-build" Jan 21 10:10:37 crc kubenswrapper[5119]: I0121 10:10:37.954251 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 21 10:10:38 crc kubenswrapper[5119]: I0121 10:10:38.240837 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 21 10:10:38 crc kubenswrapper[5119]: W0121 10:10:38.243912 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd31f303f_6cf4_4177_904a_97d7409af8e3.slice/crio-6b08b98f58ea91d5ba279365c59ecaa01d644fd7dee39a9ca8d68e221be2d329 WatchSource:0}: Error finding container 6b08b98f58ea91d5ba279365c59ecaa01d644fd7dee39a9ca8d68e221be2d329: Status 404 returned error can't find the container with id 6b08b98f58ea91d5ba279365c59ecaa01d644fd7dee39a9ca8d68e221be2d329 Jan 21 10:10:38 crc kubenswrapper[5119]: I0121 10:10:38.599953 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed72636d-17c7-4a75-a4c2-ee3c07e64541" path="/var/lib/kubelet/pods/ed72636d-17c7-4a75-a4c2-ee3c07e64541/volumes" Jan 21 10:10:38 crc kubenswrapper[5119]: I0121 10:10:38.671264 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"d31f303f-6cf4-4177-904a-97d7409af8e3","Type":"ContainerStarted","Data":"82ec11e159b6a390e42b6888ab69c6306e74593458c64376c65f0fb2ce22a605"} Jan 21 10:10:38 crc kubenswrapper[5119]: I0121 10:10:38.671654 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"d31f303f-6cf4-4177-904a-97d7409af8e3","Type":"ContainerStarted","Data":"6b08b98f58ea91d5ba279365c59ecaa01d644fd7dee39a9ca8d68e221be2d329"} Jan 21 10:10:39 crc kubenswrapper[5119]: I0121 10:10:39.679818 5119 generic.go:358] "Generic (PLEG): container finished" podID="d31f303f-6cf4-4177-904a-97d7409af8e3" containerID="82ec11e159b6a390e42b6888ab69c6306e74593458c64376c65f0fb2ce22a605" exitCode=0 Jan 21 10:10:39 crc kubenswrapper[5119]: I0121 10:10:39.680022 5119 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"d31f303f-6cf4-4177-904a-97d7409af8e3","Type":"ContainerDied","Data":"82ec11e159b6a390e42b6888ab69c6306e74593458c64376c65f0fb2ce22a605"} Jan 21 10:10:40 crc kubenswrapper[5119]: I0121 10:10:40.688033 5119 generic.go:358] "Generic (PLEG): container finished" podID="d31f303f-6cf4-4177-904a-97d7409af8e3" containerID="53844f39b94b8bf834fa3fc6ff3f54220ab519c6bd60c5b5e9f88fa88f19cce9" exitCode=0 Jan 21 10:10:40 crc kubenswrapper[5119]: I0121 10:10:40.688228 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"d31f303f-6cf4-4177-904a-97d7409af8e3","Type":"ContainerDied","Data":"53844f39b94b8bf834fa3fc6ff3f54220ab519c6bd60c5b5e9f88fa88f19cce9"} Jan 21 10:10:40 crc kubenswrapper[5119]: I0121 10:10:40.729158 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_d31f303f-6cf4-4177-904a-97d7409af8e3/manage-dockerfile/0.log" Jan 21 10:10:41 crc kubenswrapper[5119]: I0121 10:10:41.699250 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"d31f303f-6cf4-4177-904a-97d7409af8e3","Type":"ContainerStarted","Data":"974fae7a58c068374f467296e44911da26e0edf36d235939dd1366c9f6bab756"} Jan 21 10:10:41 crc kubenswrapper[5119]: I0121 10:10:41.729225 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-2-build" podStartSLOduration=4.729190425 podStartE2EDuration="4.729190425s" podCreationTimestamp="2026-01-21 10:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:10:41.724713063 +0000 UTC m=+957.392804761" watchObservedRunningTime="2026-01-21 10:10:41.729190425 +0000 UTC m=+957.397282173" Jan 21 10:10:47 crc kubenswrapper[5119]: I0121 10:10:47.217340 5119 scope.go:117] "RemoveContainer" 
containerID="3862bf87328c74f16a513eeb7d8ce8aeca2c4fe2745d75fdc8c458c32a83cd2c" Jan 21 10:12:00 crc kubenswrapper[5119]: I0121 10:12:00.162214 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483172-9blg4"] Jan 21 10:12:05 crc kubenswrapper[5119]: I0121 10:12:05.449796 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483172-9blg4" Jan 21 10:12:05 crc kubenswrapper[5119]: I0121 10:12:05.453676 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:12:05 crc kubenswrapper[5119]: I0121 10:12:05.453931 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:12:05 crc kubenswrapper[5119]: I0121 10:12:05.454217 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\"" Jan 21 10:12:05 crc kubenswrapper[5119]: I0121 10:12:05.464320 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483172-9blg4"] Jan 21 10:12:05 crc kubenswrapper[5119]: I0121 10:12:05.512368 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjjxd\" (UniqueName: \"kubernetes.io/projected/0b6d1afc-21f7-4fdc-82af-808bff8dcc9f-kube-api-access-qjjxd\") pod \"auto-csr-approver-29483172-9blg4\" (UID: \"0b6d1afc-21f7-4fdc-82af-808bff8dcc9f\") " pod="openshift-infra/auto-csr-approver-29483172-9blg4" Jan 21 10:12:05 crc kubenswrapper[5119]: I0121 10:12:05.614892 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qjjxd\" (UniqueName: \"kubernetes.io/projected/0b6d1afc-21f7-4fdc-82af-808bff8dcc9f-kube-api-access-qjjxd\") pod \"auto-csr-approver-29483172-9blg4\" (UID: \"0b6d1afc-21f7-4fdc-82af-808bff8dcc9f\") " 
pod="openshift-infra/auto-csr-approver-29483172-9blg4" Jan 21 10:12:05 crc kubenswrapper[5119]: I0121 10:12:05.635933 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjjxd\" (UniqueName: \"kubernetes.io/projected/0b6d1afc-21f7-4fdc-82af-808bff8dcc9f-kube-api-access-qjjxd\") pod \"auto-csr-approver-29483172-9blg4\" (UID: \"0b6d1afc-21f7-4fdc-82af-808bff8dcc9f\") " pod="openshift-infra/auto-csr-approver-29483172-9blg4" Jan 21 10:12:05 crc kubenswrapper[5119]: I0121 10:12:05.779415 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483172-9blg4" Jan 21 10:12:06 crc kubenswrapper[5119]: I0121 10:12:06.251208 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483172-9blg4"] Jan 21 10:12:07 crc kubenswrapper[5119]: I0121 10:12:07.247999 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483172-9blg4" event={"ID":"0b6d1afc-21f7-4fdc-82af-808bff8dcc9f","Type":"ContainerStarted","Data":"6a3c85d9956a4512eae56f2e7c4e59fc81a57e22d39bbc9b4fd512204b1e6d6a"} Jan 21 10:12:09 crc kubenswrapper[5119]: I0121 10:12:09.269584 5119 generic.go:358] "Generic (PLEG): container finished" podID="0b6d1afc-21f7-4fdc-82af-808bff8dcc9f" containerID="6b2667245906ac565f5c1bb307f1b14021920f44ce31f173d52c9469311cbca8" exitCode=0 Jan 21 10:12:09 crc kubenswrapper[5119]: I0121 10:12:09.269730 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483172-9blg4" event={"ID":"0b6d1afc-21f7-4fdc-82af-808bff8dcc9f","Type":"ContainerDied","Data":"6b2667245906ac565f5c1bb307f1b14021920f44ce31f173d52c9469311cbca8"} Jan 21 10:12:10 crc kubenswrapper[5119]: I0121 10:12:10.465384 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483172-9blg4" Jan 21 10:12:10 crc kubenswrapper[5119]: I0121 10:12:10.577076 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjjxd\" (UniqueName: \"kubernetes.io/projected/0b6d1afc-21f7-4fdc-82af-808bff8dcc9f-kube-api-access-qjjxd\") pod \"0b6d1afc-21f7-4fdc-82af-808bff8dcc9f\" (UID: \"0b6d1afc-21f7-4fdc-82af-808bff8dcc9f\") " Jan 21 10:12:10 crc kubenswrapper[5119]: I0121 10:12:10.583343 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b6d1afc-21f7-4fdc-82af-808bff8dcc9f-kube-api-access-qjjxd" (OuterVolumeSpecName: "kube-api-access-qjjxd") pod "0b6d1afc-21f7-4fdc-82af-808bff8dcc9f" (UID: "0b6d1afc-21f7-4fdc-82af-808bff8dcc9f"). InnerVolumeSpecName "kube-api-access-qjjxd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:12:10 crc kubenswrapper[5119]: I0121 10:12:10.678908 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qjjxd\" (UniqueName: \"kubernetes.io/projected/0b6d1afc-21f7-4fdc-82af-808bff8dcc9f-kube-api-access-qjjxd\") on node \"crc\" DevicePath \"\"" Jan 21 10:12:11 crc kubenswrapper[5119]: I0121 10:12:11.285064 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483172-9blg4" Jan 21 10:12:11 crc kubenswrapper[5119]: I0121 10:12:11.285095 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483172-9blg4" event={"ID":"0b6d1afc-21f7-4fdc-82af-808bff8dcc9f","Type":"ContainerDied","Data":"6a3c85d9956a4512eae56f2e7c4e59fc81a57e22d39bbc9b4fd512204b1e6d6a"} Jan 21 10:12:11 crc kubenswrapper[5119]: I0121 10:12:11.285141 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a3c85d9956a4512eae56f2e7c4e59fc81a57e22d39bbc9b4fd512204b1e6d6a" Jan 21 10:12:11 crc kubenswrapper[5119]: I0121 10:12:11.526365 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483166-rsv6r"] Jan 21 10:12:11 crc kubenswrapper[5119]: I0121 10:12:11.531935 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483166-rsv6r"] Jan 21 10:12:12 crc kubenswrapper[5119]: I0121 10:12:12.598351 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c216007e-7167-4880-af30-706cc2a590f8" path="/var/lib/kubelet/pods/c216007e-7167-4880-af30-706cc2a590f8/volumes" Jan 21 10:12:47 crc kubenswrapper[5119]: I0121 10:12:47.324110 5119 scope.go:117] "RemoveContainer" containerID="e6fda05d2c086a71d2b104f8fea16ab011a1923fc7c32ea88db544c8cb21a193" Jan 21 10:12:49 crc kubenswrapper[5119]: I0121 10:12:49.918816 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:12:49 crc kubenswrapper[5119]: I0121 10:12:49.919126 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:13:19 crc kubenswrapper[5119]: I0121 10:13:19.919000 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:13:19 crc kubenswrapper[5119]: I0121 10:13:19.919539 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:13:49 crc kubenswrapper[5119]: I0121 10:13:49.919391 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:13:49 crc kubenswrapper[5119]: I0121 10:13:49.919986 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:13:49 crc kubenswrapper[5119]: I0121 10:13:49.920033 5119 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" Jan 21 10:13:49 crc kubenswrapper[5119]: I0121 10:13:49.920731 5119 kuberuntime_manager.go:1107] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"639f312973ca7ed8a8e84c76403e2b53399a57adb4ec6e14a566fd4af25f3c2b"} pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 10:13:49 crc kubenswrapper[5119]: I0121 10:13:49.920800 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" containerID="cri-o://639f312973ca7ed8a8e84c76403e2b53399a57adb4ec6e14a566fd4af25f3c2b" gracePeriod=600 Jan 21 10:13:55 crc kubenswrapper[5119]: I0121 10:13:55.051285 5119 generic.go:358] "Generic (PLEG): container finished" podID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerID="639f312973ca7ed8a8e84c76403e2b53399a57adb4ec6e14a566fd4af25f3c2b" exitCode=0 Jan 21 10:13:55 crc kubenswrapper[5119]: I0121 10:13:55.051336 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerDied","Data":"639f312973ca7ed8a8e84c76403e2b53399a57adb4ec6e14a566fd4af25f3c2b"} Jan 21 10:13:55 crc kubenswrapper[5119]: I0121 10:13:55.052644 5119 scope.go:117] "RemoveContainer" containerID="90484e72f46c3fbcf88c1033b1658a1d68108d21cb7c6bed596a53764123001b" Jan 21 10:14:00 crc kubenswrapper[5119]: I0121 10:14:00.145675 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483174-p2cmj"] Jan 21 10:14:00 crc kubenswrapper[5119]: I0121 10:14:00.146476 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0b6d1afc-21f7-4fdc-82af-808bff8dcc9f" containerName="oc" Jan 21 10:14:00 crc kubenswrapper[5119]: I0121 10:14:00.146492 5119 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0b6d1afc-21f7-4fdc-82af-808bff8dcc9f" containerName="oc" Jan 21 10:14:00 crc kubenswrapper[5119]: I0121 10:14:00.146657 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="0b6d1afc-21f7-4fdc-82af-808bff8dcc9f" containerName="oc" Jan 21 10:14:09 crc kubenswrapper[5119]: I0121 10:14:09.109015 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483174-p2cmj"] Jan 21 10:14:09 crc kubenswrapper[5119]: I0121 10:14:09.114337 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483174-p2cmj" Jan 21 10:14:09 crc kubenswrapper[5119]: I0121 10:14:09.121156 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:14:09 crc kubenswrapper[5119]: I0121 10:14:09.121415 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:14:09 crc kubenswrapper[5119]: I0121 10:14:09.121172 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\"" Jan 21 10:14:09 crc kubenswrapper[5119]: I0121 10:14:09.235376 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s5bm\" (UniqueName: \"kubernetes.io/projected/21745b5e-6ff2-4a6e-a97f-406c11e58793-kube-api-access-6s5bm\") pod \"auto-csr-approver-29483174-p2cmj\" (UID: \"21745b5e-6ff2-4a6e-a97f-406c11e58793\") " pod="openshift-infra/auto-csr-approver-29483174-p2cmj" Jan 21 10:14:09 crc kubenswrapper[5119]: I0121 10:14:09.337351 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6s5bm\" (UniqueName: \"kubernetes.io/projected/21745b5e-6ff2-4a6e-a97f-406c11e58793-kube-api-access-6s5bm\") pod \"auto-csr-approver-29483174-p2cmj\" (UID: \"21745b5e-6ff2-4a6e-a97f-406c11e58793\") " 
pod="openshift-infra/auto-csr-approver-29483174-p2cmj" Jan 21 10:14:09 crc kubenswrapper[5119]: I0121 10:14:09.357200 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s5bm\" (UniqueName: \"kubernetes.io/projected/21745b5e-6ff2-4a6e-a97f-406c11e58793-kube-api-access-6s5bm\") pod \"auto-csr-approver-29483174-p2cmj\" (UID: \"21745b5e-6ff2-4a6e-a97f-406c11e58793\") " pod="openshift-infra/auto-csr-approver-29483174-p2cmj" Jan 21 10:14:09 crc kubenswrapper[5119]: I0121 10:14:09.442100 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483174-p2cmj" Jan 21 10:14:09 crc kubenswrapper[5119]: W0121 10:14:09.624097 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21745b5e_6ff2_4a6e_a97f_406c11e58793.slice/crio-c824ac65689aaa9f906b60fcba653c0fbc1ee18848aa3a7606dfba908fdf5f86 WatchSource:0}: Error finding container c824ac65689aaa9f906b60fcba653c0fbc1ee18848aa3a7606dfba908fdf5f86: Status 404 returned error can't find the container with id c824ac65689aaa9f906b60fcba653c0fbc1ee18848aa3a7606dfba908fdf5f86 Jan 21 10:14:09 crc kubenswrapper[5119]: I0121 10:14:09.635004 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483174-p2cmj"] Jan 21 10:14:10 crc kubenswrapper[5119]: I0121 10:14:10.175923 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerStarted","Data":"e1293b9d5697f64c75dfcb0e9afb6682f3461979e9927eeb7658215e6f071a1d"} Jan 21 10:14:10 crc kubenswrapper[5119]: I0121 10:14:10.177011 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483174-p2cmj" 
event={"ID":"21745b5e-6ff2-4a6e-a97f-406c11e58793","Type":"ContainerStarted","Data":"c824ac65689aaa9f906b60fcba653c0fbc1ee18848aa3a7606dfba908fdf5f86"} Jan 21 10:14:15 crc kubenswrapper[5119]: I0121 10:14:15.222382 5119 generic.go:358] "Generic (PLEG): container finished" podID="21745b5e-6ff2-4a6e-a97f-406c11e58793" containerID="2e4862dfd5c5374082fa377cf2c8c7fd865efaa7c3b0474857cef9eaf047116b" exitCode=0 Jan 21 10:14:15 crc kubenswrapper[5119]: I0121 10:14:15.222473 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483174-p2cmj" event={"ID":"21745b5e-6ff2-4a6e-a97f-406c11e58793","Type":"ContainerDied","Data":"2e4862dfd5c5374082fa377cf2c8c7fd865efaa7c3b0474857cef9eaf047116b"} Jan 21 10:14:16 crc kubenswrapper[5119]: I0121 10:14:16.532026 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483174-p2cmj" Jan 21 10:14:16 crc kubenswrapper[5119]: I0121 10:14:16.641832 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6s5bm\" (UniqueName: \"kubernetes.io/projected/21745b5e-6ff2-4a6e-a97f-406c11e58793-kube-api-access-6s5bm\") pod \"21745b5e-6ff2-4a6e-a97f-406c11e58793\" (UID: \"21745b5e-6ff2-4a6e-a97f-406c11e58793\") " Jan 21 10:14:16 crc kubenswrapper[5119]: I0121 10:14:16.647816 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21745b5e-6ff2-4a6e-a97f-406c11e58793-kube-api-access-6s5bm" (OuterVolumeSpecName: "kube-api-access-6s5bm") pod "21745b5e-6ff2-4a6e-a97f-406c11e58793" (UID: "21745b5e-6ff2-4a6e-a97f-406c11e58793"). InnerVolumeSpecName "kube-api-access-6s5bm". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:14:16 crc kubenswrapper[5119]: I0121 10:14:16.745445 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6s5bm\" (UniqueName: \"kubernetes.io/projected/21745b5e-6ff2-4a6e-a97f-406c11e58793-kube-api-access-6s5bm\") on node \"crc\" DevicePath \"\"" Jan 21 10:14:17 crc kubenswrapper[5119]: I0121 10:14:17.238730 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483174-p2cmj" Jan 21 10:14:17 crc kubenswrapper[5119]: I0121 10:14:17.238728 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483174-p2cmj" event={"ID":"21745b5e-6ff2-4a6e-a97f-406c11e58793","Type":"ContainerDied","Data":"c824ac65689aaa9f906b60fcba653c0fbc1ee18848aa3a7606dfba908fdf5f86"} Jan 21 10:14:17 crc kubenswrapper[5119]: I0121 10:14:17.238879 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c824ac65689aaa9f906b60fcba653c0fbc1ee18848aa3a7606dfba908fdf5f86" Jan 21 10:14:17 crc kubenswrapper[5119]: I0121 10:14:17.593464 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483168-6b2lx"] Jan 21 10:14:17 crc kubenswrapper[5119]: I0121 10:14:17.599316 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483168-6b2lx"] Jan 21 10:14:18 crc kubenswrapper[5119]: I0121 10:14:18.605855 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5418d84c-9649-4f00-ae33-e55a33c042ff" path="/var/lib/kubelet/pods/5418d84c-9649-4f00-ae33-e55a33c042ff/volumes" Jan 21 10:14:31 crc kubenswrapper[5119]: I0121 10:14:31.336801 5119 generic.go:358] "Generic (PLEG): container finished" podID="d31f303f-6cf4-4177-904a-97d7409af8e3" containerID="974fae7a58c068374f467296e44911da26e0edf36d235939dd1366c9f6bab756" exitCode=0 Jan 21 10:14:31 crc kubenswrapper[5119]: I0121 10:14:31.336870 5119 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"d31f303f-6cf4-4177-904a-97d7409af8e3","Type":"ContainerDied","Data":"974fae7a58c068374f467296e44911da26e0edf36d235939dd1366c9f6bab756"} Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.574322 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.665072 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vtk6\" (UniqueName: \"kubernetes.io/projected/d31f303f-6cf4-4177-904a-97d7409af8e3-kube-api-access-7vtk6\") pod \"d31f303f-6cf4-4177-904a-97d7409af8e3\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.665163 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/d31f303f-6cf4-4177-904a-97d7409af8e3-builder-dockercfg-llwsp-push\") pod \"d31f303f-6cf4-4177-904a-97d7409af8e3\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.665187 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d31f303f-6cf4-4177-904a-97d7409af8e3-build-blob-cache\") pod \"d31f303f-6cf4-4177-904a-97d7409af8e3\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.665212 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d31f303f-6cf4-4177-904a-97d7409af8e3-node-pullsecrets\") pod \"d31f303f-6cf4-4177-904a-97d7409af8e3\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.665312 5119 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d31f303f-6cf4-4177-904a-97d7409af8e3-container-storage-run\") pod \"d31f303f-6cf4-4177-904a-97d7409af8e3\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.665340 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d31f303f-6cf4-4177-904a-97d7409af8e3-buildcachedir\") pod \"d31f303f-6cf4-4177-904a-97d7409af8e3\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.665354 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/d31f303f-6cf4-4177-904a-97d7409af8e3-builder-dockercfg-llwsp-pull\") pod \"d31f303f-6cf4-4177-904a-97d7409af8e3\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.665370 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d31f303f-6cf4-4177-904a-97d7409af8e3-build-system-configs\") pod \"d31f303f-6cf4-4177-904a-97d7409af8e3\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.665388 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d31f303f-6cf4-4177-904a-97d7409af8e3-container-storage-root\") pod \"d31f303f-6cf4-4177-904a-97d7409af8e3\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.665413 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/d31f303f-6cf4-4177-904a-97d7409af8e3-build-proxy-ca-bundles\") pod \"d31f303f-6cf4-4177-904a-97d7409af8e3\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.665438 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d31f303f-6cf4-4177-904a-97d7409af8e3-buildworkdir\") pod \"d31f303f-6cf4-4177-904a-97d7409af8e3\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.665468 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d31f303f-6cf4-4177-904a-97d7409af8e3-build-ca-bundles\") pod \"d31f303f-6cf4-4177-904a-97d7409af8e3\" (UID: \"d31f303f-6cf4-4177-904a-97d7409af8e3\") " Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.665880 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d31f303f-6cf4-4177-904a-97d7409af8e3-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "d31f303f-6cf4-4177-904a-97d7409af8e3" (UID: "d31f303f-6cf4-4177-904a-97d7409af8e3"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.665906 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d31f303f-6cf4-4177-904a-97d7409af8e3-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "d31f303f-6cf4-4177-904a-97d7409af8e3" (UID: "d31f303f-6cf4-4177-904a-97d7409af8e3"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.666367 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d31f303f-6cf4-4177-904a-97d7409af8e3-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "d31f303f-6cf4-4177-904a-97d7409af8e3" (UID: "d31f303f-6cf4-4177-904a-97d7409af8e3"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.666384 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d31f303f-6cf4-4177-904a-97d7409af8e3-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "d31f303f-6cf4-4177-904a-97d7409af8e3" (UID: "d31f303f-6cf4-4177-904a-97d7409af8e3"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.666679 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d31f303f-6cf4-4177-904a-97d7409af8e3-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "d31f303f-6cf4-4177-904a-97d7409af8e3" (UID: "d31f303f-6cf4-4177-904a-97d7409af8e3"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.666984 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d31f303f-6cf4-4177-904a-97d7409af8e3-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "d31f303f-6cf4-4177-904a-97d7409af8e3" (UID: "d31f303f-6cf4-4177-904a-97d7409af8e3"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.670898 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d31f303f-6cf4-4177-904a-97d7409af8e3-kube-api-access-7vtk6" (OuterVolumeSpecName: "kube-api-access-7vtk6") pod "d31f303f-6cf4-4177-904a-97d7409af8e3" (UID: "d31f303f-6cf4-4177-904a-97d7409af8e3"). InnerVolumeSpecName "kube-api-access-7vtk6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.671002 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d31f303f-6cf4-4177-904a-97d7409af8e3-builder-dockercfg-llwsp-push" (OuterVolumeSpecName: "builder-dockercfg-llwsp-push") pod "d31f303f-6cf4-4177-904a-97d7409af8e3" (UID: "d31f303f-6cf4-4177-904a-97d7409af8e3"). InnerVolumeSpecName "builder-dockercfg-llwsp-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.672309 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d31f303f-6cf4-4177-904a-97d7409af8e3-builder-dockercfg-llwsp-pull" (OuterVolumeSpecName: "builder-dockercfg-llwsp-pull") pod "d31f303f-6cf4-4177-904a-97d7409af8e3" (UID: "d31f303f-6cf4-4177-904a-97d7409af8e3"). InnerVolumeSpecName "builder-dockercfg-llwsp-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.678789 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d31f303f-6cf4-4177-904a-97d7409af8e3-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "d31f303f-6cf4-4177-904a-97d7409af8e3" (UID: "d31f303f-6cf4-4177-904a-97d7409af8e3"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.767205 5119 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d31f303f-6cf4-4177-904a-97d7409af8e3-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.767251 5119 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d31f303f-6cf4-4177-904a-97d7409af8e3-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.767266 5119 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/d31f303f-6cf4-4177-904a-97d7409af8e3-builder-dockercfg-llwsp-pull\") on node \"crc\" DevicePath \"\"" Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.767279 5119 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d31f303f-6cf4-4177-904a-97d7409af8e3-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.767290 5119 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d31f303f-6cf4-4177-904a-97d7409af8e3-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.767303 5119 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d31f303f-6cf4-4177-904a-97d7409af8e3-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.767315 5119 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d31f303f-6cf4-4177-904a-97d7409af8e3-build-ca-bundles\") on node \"crc\" DevicePath \"\"" 
Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.767326 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7vtk6\" (UniqueName: \"kubernetes.io/projected/d31f303f-6cf4-4177-904a-97d7409af8e3-kube-api-access-7vtk6\") on node \"crc\" DevicePath \"\"" Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.767337 5119 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/d31f303f-6cf4-4177-904a-97d7409af8e3-builder-dockercfg-llwsp-push\") on node \"crc\" DevicePath \"\"" Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.767348 5119 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d31f303f-6cf4-4177-904a-97d7409af8e3-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 21 10:14:32 crc kubenswrapper[5119]: I0121 10:14:32.982685 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d31f303f-6cf4-4177-904a-97d7409af8e3-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "d31f303f-6cf4-4177-904a-97d7409af8e3" (UID: "d31f303f-6cf4-4177-904a-97d7409af8e3"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:14:33 crc kubenswrapper[5119]: I0121 10:14:33.070515 5119 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d31f303f-6cf4-4177-904a-97d7409af8e3-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 21 10:14:33 crc kubenswrapper[5119]: I0121 10:14:33.352964 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"d31f303f-6cf4-4177-904a-97d7409af8e3","Type":"ContainerDied","Data":"6b08b98f58ea91d5ba279365c59ecaa01d644fd7dee39a9ca8d68e221be2d329"} Jan 21 10:14:33 crc kubenswrapper[5119]: I0121 10:14:33.353002 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b08b98f58ea91d5ba279365c59ecaa01d644fd7dee39a9ca8d68e221be2d329" Jan 21 10:14:33 crc kubenswrapper[5119]: I0121 10:14:33.353152 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 21 10:14:35 crc kubenswrapper[5119]: I0121 10:14:35.024719 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d31f303f-6cf4-4177-904a-97d7409af8e3-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "d31f303f-6cf4-4177-904a-97d7409af8e3" (UID: "d31f303f-6cf4-4177-904a-97d7409af8e3"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:14:35 crc kubenswrapper[5119]: I0121 10:14:35.093509 5119 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d31f303f-6cf4-4177-904a-97d7409af8e3-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 21 10:14:37 crc kubenswrapper[5119]: I0121 10:14:37.377041 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 21 10:14:37 crc kubenswrapper[5119]: I0121 10:14:37.378149 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d31f303f-6cf4-4177-904a-97d7409af8e3" containerName="manage-dockerfile" Jan 21 10:14:37 crc kubenswrapper[5119]: I0121 10:14:37.378173 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="d31f303f-6cf4-4177-904a-97d7409af8e3" containerName="manage-dockerfile" Jan 21 10:14:37 crc kubenswrapper[5119]: I0121 10:14:37.378191 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d31f303f-6cf4-4177-904a-97d7409af8e3" containerName="git-clone" Jan 21 10:14:37 crc kubenswrapper[5119]: I0121 10:14:37.378198 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="d31f303f-6cf4-4177-904a-97d7409af8e3" containerName="git-clone" Jan 21 10:14:37 crc kubenswrapper[5119]: I0121 10:14:37.378215 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d31f303f-6cf4-4177-904a-97d7409af8e3" containerName="docker-build" Jan 21 10:14:37 crc kubenswrapper[5119]: I0121 10:14:37.378223 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="d31f303f-6cf4-4177-904a-97d7409af8e3" containerName="docker-build" Jan 21 10:14:37 crc kubenswrapper[5119]: I0121 10:14:37.378234 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="21745b5e-6ff2-4a6e-a97f-406c11e58793" containerName="oc" Jan 21 10:14:37 crc kubenswrapper[5119]: I0121 10:14:37.378242 5119 
state_mem.go:107] "Deleted CPUSet assignment" podUID="21745b5e-6ff2-4a6e-a97f-406c11e58793" containerName="oc" Jan 21 10:14:37 crc kubenswrapper[5119]: I0121 10:14:37.378376 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="d31f303f-6cf4-4177-904a-97d7409af8e3" containerName="docker-build" Jan 21 10:14:37 crc kubenswrapper[5119]: I0121 10:14:37.378395 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="21745b5e-6ff2-4a6e-a97f-406c11e58793" containerName="oc" Jan 21 10:14:37 crc kubenswrapper[5119]: I0121 10:14:37.873624 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 21 10:14:37 crc kubenswrapper[5119]: I0121 10:14:37.873631 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:37 crc kubenswrapper[5119]: I0121 10:14:37.877109 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-sys-config\"" Jan 21 10:14:37 crc kubenswrapper[5119]: I0121 10:14:37.877426 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-llwsp\"" Jan 21 10:14:37 crc kubenswrapper[5119]: I0121 10:14:37.881036 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-ca\"" Jan 21 10:14:37 crc kubenswrapper[5119]: I0121 10:14:37.882413 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-global-ca\"" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.032675 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a166511c-c247-4e08-8931-3dda90bee86d-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " 
pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.032728 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a166511c-c247-4e08-8931-3dda90bee86d-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.032770 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf4mn\" (UniqueName: \"kubernetes.io/projected/a166511c-c247-4e08-8931-3dda90bee86d-kube-api-access-nf4mn\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.032792 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a166511c-c247-4e08-8931-3dda90bee86d-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.032822 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a166511c-c247-4e08-8931-3dda90bee86d-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.032848 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/a166511c-c247-4e08-8931-3dda90bee86d-builder-dockercfg-llwsp-pull\") pod \"sg-bridge-1-build\" (UID: 
\"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.032869 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/a166511c-c247-4e08-8931-3dda90bee86d-builder-dockercfg-llwsp-push\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.032906 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a166511c-c247-4e08-8931-3dda90bee86d-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.032920 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a166511c-c247-4e08-8931-3dda90bee86d-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.032949 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a166511c-c247-4e08-8931-3dda90bee86d-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.032977 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/a166511c-c247-4e08-8931-3dda90bee86d-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.033005 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a166511c-c247-4e08-8931-3dda90bee86d-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.134009 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4mn\" (UniqueName: \"kubernetes.io/projected/a166511c-c247-4e08-8931-3dda90bee86d-kube-api-access-nf4mn\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.134060 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a166511c-c247-4e08-8931-3dda90bee86d-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.134084 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a166511c-c247-4e08-8931-3dda90bee86d-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.134399 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: 
\"kubernetes.io/secret/a166511c-c247-4e08-8931-3dda90bee86d-builder-dockercfg-llwsp-pull\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.134478 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a166511c-c247-4e08-8931-3dda90bee86d-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.134490 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/a166511c-c247-4e08-8931-3dda90bee86d-builder-dockercfg-llwsp-push\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.134662 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a166511c-c247-4e08-8931-3dda90bee86d-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.134753 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a166511c-c247-4e08-8931-3dda90bee86d-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.134850 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/a166511c-c247-4e08-8931-3dda90bee86d-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.135190 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a166511c-c247-4e08-8931-3dda90bee86d-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.135661 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a166511c-c247-4e08-8931-3dda90bee86d-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.136040 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a166511c-c247-4e08-8931-3dda90bee86d-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.136319 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a166511c-c247-4e08-8931-3dda90bee86d-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.134774 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a166511c-c247-4e08-8931-3dda90bee86d-buildcachedir\") pod \"sg-bridge-1-build\" 
(UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.135586 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a166511c-c247-4e08-8931-3dda90bee86d-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.134975 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a166511c-c247-4e08-8931-3dda90bee86d-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.135986 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a166511c-c247-4e08-8931-3dda90bee86d-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.135014 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a166511c-c247-4e08-8931-3dda90bee86d-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.136281 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a166511c-c247-4e08-8931-3dda90bee86d-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc 
kubenswrapper[5119]: I0121 10:14:38.135145 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a166511c-c247-4e08-8931-3dda90bee86d-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.137400 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a166511c-c247-4e08-8931-3dda90bee86d-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.140506 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/a166511c-c247-4e08-8931-3dda90bee86d-builder-dockercfg-llwsp-pull\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.147330 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/a166511c-c247-4e08-8931-3dda90bee86d-builder-dockercfg-llwsp-push\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.150401 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf4mn\" (UniqueName: \"kubernetes.io/projected/a166511c-c247-4e08-8931-3dda90bee86d-kube-api-access-nf4mn\") pod \"sg-bridge-1-build\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.192474 5119 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:38 crc kubenswrapper[5119]: I0121 10:14:38.599448 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 21 10:14:39 crc kubenswrapper[5119]: I0121 10:14:39.395665 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"a166511c-c247-4e08-8931-3dda90bee86d","Type":"ContainerStarted","Data":"f9c1631d30857d0491fc9ca080e49d8dc3d1d88ffedd04fb988ccccb4e30869e"} Jan 21 10:14:40 crc kubenswrapper[5119]: I0121 10:14:40.403181 5119 generic.go:358] "Generic (PLEG): container finished" podID="a166511c-c247-4e08-8931-3dda90bee86d" containerID="01eb22fa77ac0fa234110256d019f3e66ee12e7f4077d5d78bd95086b22b090d" exitCode=0 Jan 21 10:14:40 crc kubenswrapper[5119]: I0121 10:14:40.403310 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"a166511c-c247-4e08-8931-3dda90bee86d","Type":"ContainerDied","Data":"01eb22fa77ac0fa234110256d019f3e66ee12e7f4077d5d78bd95086b22b090d"} Jan 21 10:14:41 crc kubenswrapper[5119]: I0121 10:14:41.414283 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"a166511c-c247-4e08-8931-3dda90bee86d","Type":"ContainerStarted","Data":"57644da3910fc188ff895b82d59e9254629b184bfb3117064840f78b72b5357f"} Jan 21 10:14:41 crc kubenswrapper[5119]: I0121 10:14:41.434873 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-1-build" podStartSLOduration=4.434850594 podStartE2EDuration="4.434850594s" podCreationTimestamp="2026-01-21 10:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:14:41.434589768 +0000 UTC m=+1197.102681436" watchObservedRunningTime="2026-01-21 10:14:41.434850594 +0000 UTC m=+1197.102942282" 
Jan 21 10:14:45 crc kubenswrapper[5119]: I0121 10:14:45.908990 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log" Jan 21 10:14:45 crc kubenswrapper[5119]: I0121 10:14:45.909020 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log" Jan 21 10:14:45 crc kubenswrapper[5119]: I0121 10:14:45.915671 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 21 10:14:45 crc kubenswrapper[5119]: I0121 10:14:45.916226 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 21 10:14:47 crc kubenswrapper[5119]: I0121 10:14:47.424772 5119 scope.go:117] "RemoveContainer" containerID="296eeb23a2bfc08eee248f9406806426948a4cbe233f7112f6438c9e6086a468" Jan 21 10:14:47 crc kubenswrapper[5119]: I0121 10:14:47.714019 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 21 10:14:47 crc kubenswrapper[5119]: I0121 10:14:47.714278 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/sg-bridge-1-build" podUID="a166511c-c247-4e08-8931-3dda90bee86d" containerName="docker-build" containerID="cri-o://57644da3910fc188ff895b82d59e9254629b184bfb3117064840f78b72b5357f" gracePeriod=30 Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.082460 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_a166511c-c247-4e08-8931-3dda90bee86d/docker-build/0.log" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.083125 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.164408 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a166511c-c247-4e08-8931-3dda90bee86d-container-storage-root\") pod \"a166511c-c247-4e08-8931-3dda90bee86d\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.164503 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a166511c-c247-4e08-8931-3dda90bee86d-buildcachedir\") pod \"a166511c-c247-4e08-8931-3dda90bee86d\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.164532 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/a166511c-c247-4e08-8931-3dda90bee86d-builder-dockercfg-llwsp-push\") pod \"a166511c-c247-4e08-8931-3dda90bee86d\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.164572 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a166511c-c247-4e08-8931-3dda90bee86d-node-pullsecrets\") pod \"a166511c-c247-4e08-8931-3dda90bee86d\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.164594 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf4mn\" (UniqueName: \"kubernetes.io/projected/a166511c-c247-4e08-8931-3dda90bee86d-kube-api-access-nf4mn\") pod \"a166511c-c247-4e08-8931-3dda90bee86d\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.164648 5119 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a166511c-c247-4e08-8931-3dda90bee86d-build-ca-bundles\") pod \"a166511c-c247-4e08-8931-3dda90bee86d\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.164668 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a166511c-c247-4e08-8931-3dda90bee86d-container-storage-run\") pod \"a166511c-c247-4e08-8931-3dda90bee86d\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.164706 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a166511c-c247-4e08-8931-3dda90bee86d-buildworkdir\") pod \"a166511c-c247-4e08-8931-3dda90bee86d\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.164727 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a166511c-c247-4e08-8931-3dda90bee86d-build-system-configs\") pod \"a166511c-c247-4e08-8931-3dda90bee86d\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.164784 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/a166511c-c247-4e08-8931-3dda90bee86d-builder-dockercfg-llwsp-pull\") pod \"a166511c-c247-4e08-8931-3dda90bee86d\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.164813 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a166511c-c247-4e08-8931-3dda90bee86d-build-blob-cache\") pod 
\"a166511c-c247-4e08-8931-3dda90bee86d\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.164841 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a166511c-c247-4e08-8931-3dda90bee86d-build-proxy-ca-bundles\") pod \"a166511c-c247-4e08-8931-3dda90bee86d\" (UID: \"a166511c-c247-4e08-8931-3dda90bee86d\") " Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.165785 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a166511c-c247-4e08-8931-3dda90bee86d-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "a166511c-c247-4e08-8931-3dda90bee86d" (UID: "a166511c-c247-4e08-8931-3dda90bee86d"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.166150 5119 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a166511c-c247-4e08-8931-3dda90bee86d-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.165816 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a166511c-c247-4e08-8931-3dda90bee86d-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "a166511c-c247-4e08-8931-3dda90bee86d" (UID: "a166511c-c247-4e08-8931-3dda90bee86d"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.165848 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a166511c-c247-4e08-8931-3dda90bee86d-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "a166511c-c247-4e08-8931-3dda90bee86d" (UID: "a166511c-c247-4e08-8931-3dda90bee86d"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.165883 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a166511c-c247-4e08-8931-3dda90bee86d-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "a166511c-c247-4e08-8931-3dda90bee86d" (UID: "a166511c-c247-4e08-8931-3dda90bee86d"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.166277 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a166511c-c247-4e08-8931-3dda90bee86d-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "a166511c-c247-4e08-8931-3dda90bee86d" (UID: "a166511c-c247-4e08-8931-3dda90bee86d"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.166303 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a166511c-c247-4e08-8931-3dda90bee86d-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "a166511c-c247-4e08-8931-3dda90bee86d" (UID: "a166511c-c247-4e08-8931-3dda90bee86d"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.166674 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a166511c-c247-4e08-8931-3dda90bee86d-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "a166511c-c247-4e08-8931-3dda90bee86d" (UID: "a166511c-c247-4e08-8931-3dda90bee86d"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.171299 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a166511c-c247-4e08-8931-3dda90bee86d-builder-dockercfg-llwsp-push" (OuterVolumeSpecName: "builder-dockercfg-llwsp-push") pod "a166511c-c247-4e08-8931-3dda90bee86d" (UID: "a166511c-c247-4e08-8931-3dda90bee86d"). InnerVolumeSpecName "builder-dockercfg-llwsp-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.171342 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a166511c-c247-4e08-8931-3dda90bee86d-kube-api-access-nf4mn" (OuterVolumeSpecName: "kube-api-access-nf4mn") pod "a166511c-c247-4e08-8931-3dda90bee86d" (UID: "a166511c-c247-4e08-8931-3dda90bee86d"). InnerVolumeSpecName "kube-api-access-nf4mn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.171426 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a166511c-c247-4e08-8931-3dda90bee86d-builder-dockercfg-llwsp-pull" (OuterVolumeSpecName: "builder-dockercfg-llwsp-pull") pod "a166511c-c247-4e08-8931-3dda90bee86d" (UID: "a166511c-c247-4e08-8931-3dda90bee86d"). InnerVolumeSpecName "builder-dockercfg-llwsp-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.223473 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a166511c-c247-4e08-8931-3dda90bee86d-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "a166511c-c247-4e08-8931-3dda90bee86d" (UID: "a166511c-c247-4e08-8931-3dda90bee86d"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.267578 5119 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a166511c-c247-4e08-8931-3dda90bee86d-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.267632 5119 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/a166511c-c247-4e08-8931-3dda90bee86d-builder-dockercfg-llwsp-push\") on node \"crc\" DevicePath \"\"" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.267651 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nf4mn\" (UniqueName: \"kubernetes.io/projected/a166511c-c247-4e08-8931-3dda90bee86d-kube-api-access-nf4mn\") on node \"crc\" DevicePath \"\"" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.267662 5119 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a166511c-c247-4e08-8931-3dda90bee86d-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.267673 5119 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a166511c-c247-4e08-8931-3dda90bee86d-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.267686 5119 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a166511c-c247-4e08-8931-3dda90bee86d-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.267696 5119 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a166511c-c247-4e08-8931-3dda90bee86d-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 
21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.267707 5119 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/a166511c-c247-4e08-8931-3dda90bee86d-builder-dockercfg-llwsp-pull\") on node \"crc\" DevicePath \"\"" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.267718 5119 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a166511c-c247-4e08-8931-3dda90bee86d-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.267730 5119 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a166511c-c247-4e08-8931-3dda90bee86d-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.301477 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a166511c-c247-4e08-8931-3dda90bee86d-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "a166511c-c247-4e08-8931-3dda90bee86d" (UID: "a166511c-c247-4e08-8931-3dda90bee86d"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.368667 5119 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a166511c-c247-4e08-8931-3dda90bee86d-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.461582 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_a166511c-c247-4e08-8931-3dda90bee86d/docker-build/0.log" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.462509 5119 generic.go:358] "Generic (PLEG): container finished" podID="a166511c-c247-4e08-8931-3dda90bee86d" containerID="57644da3910fc188ff895b82d59e9254629b184bfb3117064840f78b72b5357f" exitCode=1 Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.462538 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"a166511c-c247-4e08-8931-3dda90bee86d","Type":"ContainerDied","Data":"57644da3910fc188ff895b82d59e9254629b184bfb3117064840f78b72b5357f"} Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.462628 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.462646 5119 scope.go:117] "RemoveContainer" containerID="57644da3910fc188ff895b82d59e9254629b184bfb3117064840f78b72b5357f" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.462628 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"a166511c-c247-4e08-8931-3dda90bee86d","Type":"ContainerDied","Data":"f9c1631d30857d0491fc9ca080e49d8dc3d1d88ffedd04fb988ccccb4e30869e"} Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.505887 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.510427 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.520305 5119 scope.go:117] "RemoveContainer" containerID="01eb22fa77ac0fa234110256d019f3e66ee12e7f4077d5d78bd95086b22b090d" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.578897 5119 scope.go:117] "RemoveContainer" containerID="57644da3910fc188ff895b82d59e9254629b184bfb3117064840f78b72b5357f" Jan 21 10:14:48 crc kubenswrapper[5119]: E0121 10:14:48.579356 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57644da3910fc188ff895b82d59e9254629b184bfb3117064840f78b72b5357f\": container with ID starting with 57644da3910fc188ff895b82d59e9254629b184bfb3117064840f78b72b5357f not found: ID does not exist" containerID="57644da3910fc188ff895b82d59e9254629b184bfb3117064840f78b72b5357f" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.579407 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57644da3910fc188ff895b82d59e9254629b184bfb3117064840f78b72b5357f"} err="failed to get container status 
\"57644da3910fc188ff895b82d59e9254629b184bfb3117064840f78b72b5357f\": rpc error: code = NotFound desc = could not find container \"57644da3910fc188ff895b82d59e9254629b184bfb3117064840f78b72b5357f\": container with ID starting with 57644da3910fc188ff895b82d59e9254629b184bfb3117064840f78b72b5357f not found: ID does not exist" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.579428 5119 scope.go:117] "RemoveContainer" containerID="01eb22fa77ac0fa234110256d019f3e66ee12e7f4077d5d78bd95086b22b090d" Jan 21 10:14:48 crc kubenswrapper[5119]: E0121 10:14:48.579689 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01eb22fa77ac0fa234110256d019f3e66ee12e7f4077d5d78bd95086b22b090d\": container with ID starting with 01eb22fa77ac0fa234110256d019f3e66ee12e7f4077d5d78bd95086b22b090d not found: ID does not exist" containerID="01eb22fa77ac0fa234110256d019f3e66ee12e7f4077d5d78bd95086b22b090d" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.579713 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01eb22fa77ac0fa234110256d019f3e66ee12e7f4077d5d78bd95086b22b090d"} err="failed to get container status \"01eb22fa77ac0fa234110256d019f3e66ee12e7f4077d5d78bd95086b22b090d\": rpc error: code = NotFound desc = could not find container \"01eb22fa77ac0fa234110256d019f3e66ee12e7f4077d5d78bd95086b22b090d\": container with ID starting with 01eb22fa77ac0fa234110256d019f3e66ee12e7f4077d5d78bd95086b22b090d not found: ID does not exist" Jan 21 10:14:48 crc kubenswrapper[5119]: I0121 10:14:48.597416 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a166511c-c247-4e08-8931-3dda90bee86d" path="/var/lib/kubelet/pods/a166511c-c247-4e08-8931-3dda90bee86d/volumes" Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.309486 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 21 10:14:49 crc 
kubenswrapper[5119]: I0121 10:14:49.310247 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a166511c-c247-4e08-8931-3dda90bee86d" containerName="manage-dockerfile" Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.310273 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="a166511c-c247-4e08-8931-3dda90bee86d" containerName="manage-dockerfile" Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.310309 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a166511c-c247-4e08-8931-3dda90bee86d" containerName="docker-build" Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.310317 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="a166511c-c247-4e08-8931-3dda90bee86d" containerName="docker-build" Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.310444 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="a166511c-c247-4e08-8931-3dda90bee86d" containerName="docker-build" Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.542735 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.542926 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.545227 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-global-ca\"" Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.545363 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-llwsp\"" Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.546021 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-sys-config\"" Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.546674 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-ca\"" Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.590745 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b392096e-f869-42e4-b405-995e0adf0568-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.590809 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/b392096e-f869-42e4-b405-995e0adf0568-builder-dockercfg-llwsp-push\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.591008 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b392096e-f869-42e4-b405-995e0adf0568-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: 
\"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.591049 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b392096e-f869-42e4-b405-995e0adf0568-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.591071 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b392096e-f869-42e4-b405-995e0adf0568-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.591188 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b392096e-f869-42e4-b405-995e0adf0568-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.591271 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b392096e-f869-42e4-b405-995e0adf0568-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.591329 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b392096e-f869-42e4-b405-995e0adf0568-container-storage-run\") pod 
\"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.591354 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl774\" (UniqueName: \"kubernetes.io/projected/b392096e-f869-42e4-b405-995e0adf0568-kube-api-access-dl774\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.591382 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b392096e-f869-42e4-b405-995e0adf0568-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.591402 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b392096e-f869-42e4-b405-995e0adf0568-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.591442 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/b392096e-f869-42e4-b405-995e0adf0568-builder-dockercfg-llwsp-pull\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.692926 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/b392096e-f869-42e4-b405-995e0adf0568-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.692985 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dl774\" (UniqueName: \"kubernetes.io/projected/b392096e-f869-42e4-b405-995e0adf0568-kube-api-access-dl774\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.693024 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b392096e-f869-42e4-b405-995e0adf0568-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.693049 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b392096e-f869-42e4-b405-995e0adf0568-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.693257 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b392096e-f869-42e4-b405-995e0adf0568-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.693591 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b392096e-f869-42e4-b405-995e0adf0568-container-storage-root\") pod 
\"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build"
Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.693902 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b392096e-f869-42e4-b405-995e0adf0568-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build"
Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.694258 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/b392096e-f869-42e4-b405-995e0adf0568-builder-dockercfg-llwsp-pull\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build"
Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.695275 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b392096e-f869-42e4-b405-995e0adf0568-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build"
Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.695314 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/b392096e-f869-42e4-b405-995e0adf0568-builder-dockercfg-llwsp-push\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build"
Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.695340 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b392096e-f869-42e4-b405-995e0adf0568-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build"
Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.695648 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b392096e-f869-42e4-b405-995e0adf0568-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build"
Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.695711 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b392096e-f869-42e4-b405-995e0adf0568-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build"
Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.695771 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b392096e-f869-42e4-b405-995e0adf0568-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build"
Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.695903 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b392096e-f869-42e4-b405-995e0adf0568-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build"
Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.696244 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b392096e-f869-42e4-b405-995e0adf0568-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build"
Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.696273 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b392096e-f869-42e4-b405-995e0adf0568-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build"
Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.696369 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b392096e-f869-42e4-b405-995e0adf0568-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build"
Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.696998 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b392096e-f869-42e4-b405-995e0adf0568-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build"
Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.701002 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/b392096e-f869-42e4-b405-995e0adf0568-builder-dockercfg-llwsp-pull\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build"
Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.702793 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b392096e-f869-42e4-b405-995e0adf0568-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build"
Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.703652 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b392096e-f869-42e4-b405-995e0adf0568-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build"
Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.709156 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/b392096e-f869-42e4-b405-995e0adf0568-builder-dockercfg-llwsp-push\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build"
Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.719542 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl774\" (UniqueName: \"kubernetes.io/projected/b392096e-f869-42e4-b405-995e0adf0568-kube-api-access-dl774\") pod \"sg-bridge-2-build\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") " pod="service-telemetry/sg-bridge-2-build"
Jan 21 10:14:49 crc kubenswrapper[5119]: I0121 10:14:49.870594 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build"
Jan 21 10:14:50 crc kubenswrapper[5119]: I0121 10:14:50.295428 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"]
Jan 21 10:14:50 crc kubenswrapper[5119]: I0121 10:14:50.476514 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"b392096e-f869-42e4-b405-995e0adf0568","Type":"ContainerStarted","Data":"9fbf89aae9d87e14cf9191748678c192ee8ae11f29b29d65c3ea27f2ac5e9182"}
Jan 21 10:14:51 crc kubenswrapper[5119]: I0121 10:14:51.485100 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"b392096e-f869-42e4-b405-995e0adf0568","Type":"ContainerStarted","Data":"76367b7775cec5c2db1423bfed218657fad47b1052c9f75893b99b2262b10a20"}
Jan 21 10:14:52 crc kubenswrapper[5119]: I0121 10:14:52.491695 5119 generic.go:358] "Generic (PLEG): container finished" podID="b392096e-f869-42e4-b405-995e0adf0568" containerID="76367b7775cec5c2db1423bfed218657fad47b1052c9f75893b99b2262b10a20" exitCode=0
Jan 21 10:14:52 crc kubenswrapper[5119]: I0121 10:14:52.491795 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"b392096e-f869-42e4-b405-995e0adf0568","Type":"ContainerDied","Data":"76367b7775cec5c2db1423bfed218657fad47b1052c9f75893b99b2262b10a20"}
Jan 21 10:14:53 crc kubenswrapper[5119]: I0121 10:14:53.500970 5119 generic.go:358] "Generic (PLEG): container finished" podID="b392096e-f869-42e4-b405-995e0adf0568" containerID="512fd8f8e71cb7cfabf37f24c7619fbc9d129cd54cb52547f5287bf6f5f002f2" exitCode=0
Jan 21 10:14:53 crc kubenswrapper[5119]: I0121 10:14:53.501010 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"b392096e-f869-42e4-b405-995e0adf0568","Type":"ContainerDied","Data":"512fd8f8e71cb7cfabf37f24c7619fbc9d129cd54cb52547f5287bf6f5f002f2"}
Jan 21 10:14:53 crc kubenswrapper[5119]: I0121 10:14:53.534951 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_b392096e-f869-42e4-b405-995e0adf0568/manage-dockerfile/0.log"
Jan 21 10:14:54 crc kubenswrapper[5119]: I0121 10:14:54.509970 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"b392096e-f869-42e4-b405-995e0adf0568","Type":"ContainerStarted","Data":"71125eb85942716b0abd733fadc46d80a3f5ce4c4000a98fbeeafe7277d93c47"}
Jan 21 10:14:54 crc kubenswrapper[5119]: I0121 10:14:54.540787 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-2-build" podStartSLOduration=5.540762928 podStartE2EDuration="5.540762928s" podCreationTimestamp="2026-01-21 10:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:14:54.536337978 +0000 UTC m=+1210.204429666" watchObservedRunningTime="2026-01-21 10:14:54.540762928 +0000 UTC m=+1210.208854606"
Jan 21 10:15:00 crc kubenswrapper[5119]: I0121 10:15:00.137099 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483175-h66w7"]
Jan 21 10:15:00 crc kubenswrapper[5119]: I0121 10:15:00.169927 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483175-h66w7"]
Jan 21 10:15:00 crc kubenswrapper[5119]: I0121 10:15:00.170082 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-h66w7"
Jan 21 10:15:00 crc kubenswrapper[5119]: I0121 10:15:00.173116 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Jan 21 10:15:00 crc kubenswrapper[5119]: I0121 10:15:00.173985 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Jan 21 10:15:00 crc kubenswrapper[5119]: I0121 10:15:00.257289 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7-secret-volume\") pod \"collect-profiles-29483175-h66w7\" (UID: \"0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-h66w7"
Jan 21 10:15:00 crc kubenswrapper[5119]: I0121 10:15:00.257692 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7-config-volume\") pod \"collect-profiles-29483175-h66w7\" (UID: \"0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-h66w7"
Jan 21 10:15:00 crc kubenswrapper[5119]: I0121 10:15:00.257897 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-864ld\" (UniqueName: \"kubernetes.io/projected/0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7-kube-api-access-864ld\") pod \"collect-profiles-29483175-h66w7\" (UID: \"0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-h66w7"
Jan 21 10:15:00 crc kubenswrapper[5119]: I0121 10:15:00.359261 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7-config-volume\") pod \"collect-profiles-29483175-h66w7\" (UID: \"0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-h66w7"
Jan 21 10:15:00 crc kubenswrapper[5119]: I0121 10:15:00.359582 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-864ld\" (UniqueName: \"kubernetes.io/projected/0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7-kube-api-access-864ld\") pod \"collect-profiles-29483175-h66w7\" (UID: \"0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-h66w7"
Jan 21 10:15:00 crc kubenswrapper[5119]: I0121 10:15:00.359736 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7-secret-volume\") pod \"collect-profiles-29483175-h66w7\" (UID: \"0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-h66w7"
Jan 21 10:15:00 crc kubenswrapper[5119]: I0121 10:15:00.360759 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7-config-volume\") pod \"collect-profiles-29483175-h66w7\" (UID: \"0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-h66w7"
Jan 21 10:15:00 crc kubenswrapper[5119]: I0121 10:15:00.375757 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7-secret-volume\") pod \"collect-profiles-29483175-h66w7\" (UID: \"0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-h66w7"
Jan 21 10:15:00 crc kubenswrapper[5119]: I0121 10:15:00.380536 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-864ld\" (UniqueName: \"kubernetes.io/projected/0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7-kube-api-access-864ld\") pod \"collect-profiles-29483175-h66w7\" (UID: \"0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-h66w7"
Jan 21 10:15:00 crc kubenswrapper[5119]: I0121 10:15:00.489488 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-h66w7"
Jan 21 10:15:00 crc kubenswrapper[5119]: I0121 10:15:00.921301 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483175-h66w7"]
Jan 21 10:15:00 crc kubenswrapper[5119]: I0121 10:15:00.939505 5119 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 10:15:01 crc kubenswrapper[5119]: I0121 10:15:01.564989 5119 generic.go:358] "Generic (PLEG): container finished" podID="0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7" containerID="2b322f1d7d78f08e77443a5f3021fc045b560ada86c757684b1962d8710ad259" exitCode=0
Jan 21 10:15:01 crc kubenswrapper[5119]: I0121 10:15:01.565131 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-h66w7" event={"ID":"0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7","Type":"ContainerDied","Data":"2b322f1d7d78f08e77443a5f3021fc045b560ada86c757684b1962d8710ad259"}
Jan 21 10:15:01 crc kubenswrapper[5119]: I0121 10:15:01.565336 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-h66w7" event={"ID":"0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7","Type":"ContainerStarted","Data":"a8e72084f634449763818e197c7f8098f24e9e77d4585d7c62f60e9f1472f8be"}
Jan 21 10:15:02 crc kubenswrapper[5119]: I0121 10:15:02.819238 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-h66w7"
Jan 21 10:15:02 crc kubenswrapper[5119]: I0121 10:15:02.896452 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7-config-volume\") pod \"0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7\" (UID: \"0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7\") "
Jan 21 10:15:02 crc kubenswrapper[5119]: I0121 10:15:02.896553 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7-secret-volume\") pod \"0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7\" (UID: \"0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7\") "
Jan 21 10:15:02 crc kubenswrapper[5119]: I0121 10:15:02.896696 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-864ld\" (UniqueName: \"kubernetes.io/projected/0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7-kube-api-access-864ld\") pod \"0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7\" (UID: \"0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7\") "
Jan 21 10:15:02 crc kubenswrapper[5119]: I0121 10:15:02.897045 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7-config-volume" (OuterVolumeSpecName: "config-volume") pod "0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7" (UID: "0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 10:15:02 crc kubenswrapper[5119]: I0121 10:15:02.905496 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7" (UID: "0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 10:15:02 crc kubenswrapper[5119]: I0121 10:15:02.910767 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7-kube-api-access-864ld" (OuterVolumeSpecName: "kube-api-access-864ld") pod "0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7" (UID: "0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7"). InnerVolumeSpecName "kube-api-access-864ld". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:15:02 crc kubenswrapper[5119]: I0121 10:15:02.998571 5119 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 10:15:02 crc kubenswrapper[5119]: I0121 10:15:02.998671 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-864ld\" (UniqueName: \"kubernetes.io/projected/0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7-kube-api-access-864ld\") on node \"crc\" DevicePath \"\""
Jan 21 10:15:02 crc kubenswrapper[5119]: I0121 10:15:02.998690 5119 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 10:15:03 crc kubenswrapper[5119]: I0121 10:15:03.578848 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-h66w7"
Jan 21 10:15:03 crc kubenswrapper[5119]: I0121 10:15:03.578855 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-h66w7" event={"ID":"0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7","Type":"ContainerDied","Data":"a8e72084f634449763818e197c7f8098f24e9e77d4585d7c62f60e9f1472f8be"}
Jan 21 10:15:03 crc kubenswrapper[5119]: I0121 10:15:03.578911 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8e72084f634449763818e197c7f8098f24e9e77d4585d7c62f60e9f1472f8be"
Jan 21 10:15:46 crc kubenswrapper[5119]: I0121 10:15:46.889747 5119 generic.go:358] "Generic (PLEG): container finished" podID="b392096e-f869-42e4-b405-995e0adf0568" containerID="71125eb85942716b0abd733fadc46d80a3f5ce4c4000a98fbeeafe7277d93c47" exitCode=0
Jan 21 10:15:46 crc kubenswrapper[5119]: I0121 10:15:46.891263 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"b392096e-f869-42e4-b405-995e0adf0568","Type":"ContainerDied","Data":"71125eb85942716b0abd733fadc46d80a3f5ce4c4000a98fbeeafe7277d93c47"}
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.098078 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build"
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.117882 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b392096e-f869-42e4-b405-995e0adf0568-build-system-configs\") pod \"b392096e-f869-42e4-b405-995e0adf0568\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") "
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.117934 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b392096e-f869-42e4-b405-995e0adf0568-container-storage-root\") pod \"b392096e-f869-42e4-b405-995e0adf0568\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") "
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.118023 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b392096e-f869-42e4-b405-995e0adf0568-buildcachedir\") pod \"b392096e-f869-42e4-b405-995e0adf0568\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") "
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.118041 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/b392096e-f869-42e4-b405-995e0adf0568-builder-dockercfg-llwsp-pull\") pod \"b392096e-f869-42e4-b405-995e0adf0568\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") "
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.118083 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dl774\" (UniqueName: \"kubernetes.io/projected/b392096e-f869-42e4-b405-995e0adf0568-kube-api-access-dl774\") pod \"b392096e-f869-42e4-b405-995e0adf0568\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") "
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.118137 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b392096e-f869-42e4-b405-995e0adf0568-build-proxy-ca-bundles\") pod \"b392096e-f869-42e4-b405-995e0adf0568\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") "
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.118266 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b392096e-f869-42e4-b405-995e0adf0568-container-storage-run\") pod \"b392096e-f869-42e4-b405-995e0adf0568\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") "
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.118285 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b392096e-f869-42e4-b405-995e0adf0568-build-ca-bundles\") pod \"b392096e-f869-42e4-b405-995e0adf0568\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") "
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.118299 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b392096e-f869-42e4-b405-995e0adf0568-buildworkdir\") pod \"b392096e-f869-42e4-b405-995e0adf0568\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") "
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.118374 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b392096e-f869-42e4-b405-995e0adf0568-node-pullsecrets\") pod \"b392096e-f869-42e4-b405-995e0adf0568\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") "
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.118400 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/b392096e-f869-42e4-b405-995e0adf0568-builder-dockercfg-llwsp-push\") pod \"b392096e-f869-42e4-b405-995e0adf0568\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") "
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.118428 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b392096e-f869-42e4-b405-995e0adf0568-build-blob-cache\") pod \"b392096e-f869-42e4-b405-995e0adf0568\" (UID: \"b392096e-f869-42e4-b405-995e0adf0568\") "
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.119144 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b392096e-f869-42e4-b405-995e0adf0568-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "b392096e-f869-42e4-b405-995e0adf0568" (UID: "b392096e-f869-42e4-b405-995e0adf0568"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.119403 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b392096e-f869-42e4-b405-995e0adf0568-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "b392096e-f869-42e4-b405-995e0adf0568" (UID: "b392096e-f869-42e4-b405-995e0adf0568"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.120443 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b392096e-f869-42e4-b405-995e0adf0568-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "b392096e-f869-42e4-b405-995e0adf0568" (UID: "b392096e-f869-42e4-b405-995e0adf0568"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.120639 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b392096e-f869-42e4-b405-995e0adf0568-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "b392096e-f869-42e4-b405-995e0adf0568" (UID: "b392096e-f869-42e4-b405-995e0adf0568"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.124754 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b392096e-f869-42e4-b405-995e0adf0568-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "b392096e-f869-42e4-b405-995e0adf0568" (UID: "b392096e-f869-42e4-b405-995e0adf0568"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.125747 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b392096e-f869-42e4-b405-995e0adf0568-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "b392096e-f869-42e4-b405-995e0adf0568" (UID: "b392096e-f869-42e4-b405-995e0adf0568"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.126375 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b392096e-f869-42e4-b405-995e0adf0568-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "b392096e-f869-42e4-b405-995e0adf0568" (UID: "b392096e-f869-42e4-b405-995e0adf0568"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.127865 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b392096e-f869-42e4-b405-995e0adf0568-builder-dockercfg-llwsp-pull" (OuterVolumeSpecName: "builder-dockercfg-llwsp-pull") pod "b392096e-f869-42e4-b405-995e0adf0568" (UID: "b392096e-f869-42e4-b405-995e0adf0568"). InnerVolumeSpecName "builder-dockercfg-llwsp-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.129565 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b392096e-f869-42e4-b405-995e0adf0568-builder-dockercfg-llwsp-push" (OuterVolumeSpecName: "builder-dockercfg-llwsp-push") pod "b392096e-f869-42e4-b405-995e0adf0568" (UID: "b392096e-f869-42e4-b405-995e0adf0568"). InnerVolumeSpecName "builder-dockercfg-llwsp-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.129709 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b392096e-f869-42e4-b405-995e0adf0568-kube-api-access-dl774" (OuterVolumeSpecName: "kube-api-access-dl774") pod "b392096e-f869-42e4-b405-995e0adf0568" (UID: "b392096e-f869-42e4-b405-995e0adf0568"). InnerVolumeSpecName "kube-api-access-dl774". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.220551 5119 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b392096e-f869-42e4-b405-995e0adf0568-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.220641 5119 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b392096e-f869-42e4-b405-995e0adf0568-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.220656 5119 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b392096e-f869-42e4-b405-995e0adf0568-buildworkdir\") on node \"crc\" DevicePath \"\""
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.220666 5119 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b392096e-f869-42e4-b405-995e0adf0568-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.220677 5119 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/b392096e-f869-42e4-b405-995e0adf0568-builder-dockercfg-llwsp-push\") on node \"crc\" DevicePath \"\""
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.220688 5119 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b392096e-f869-42e4-b405-995e0adf0568-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.220698 5119 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b392096e-f869-42e4-b405-995e0adf0568-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.220708 5119 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/b392096e-f869-42e4-b405-995e0adf0568-builder-dockercfg-llwsp-pull\") on node \"crc\" DevicePath \"\""
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.220718 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dl774\" (UniqueName: \"kubernetes.io/projected/b392096e-f869-42e4-b405-995e0adf0568-kube-api-access-dl774\") on node \"crc\" DevicePath \"\""
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.220728 5119 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b392096e-f869-42e4-b405-995e0adf0568-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.253533 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b392096e-f869-42e4-b405-995e0adf0568-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "b392096e-f869-42e4-b405-995e0adf0568" (UID: "b392096e-f869-42e4-b405-995e0adf0568"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.322426 5119 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b392096e-f869-42e4-b405-995e0adf0568-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.805058 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b392096e-f869-42e4-b405-995e0adf0568-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "b392096e-f869-42e4-b405-995e0adf0568" (UID: "b392096e-f869-42e4-b405-995e0adf0568"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.828361 5119 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b392096e-f869-42e4-b405-995e0adf0568-container-storage-root\") on node \"crc\" DevicePath \"\""
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.905453 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"b392096e-f869-42e4-b405-995e0adf0568","Type":"ContainerDied","Data":"9fbf89aae9d87e14cf9191748678c192ee8ae11f29b29d65c3ea27f2ac5e9182"}
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.905497 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fbf89aae9d87e14cf9191748678c192ee8ae11f29b29d65c3ea27f2ac5e9182"
Jan 21 10:15:48 crc kubenswrapper[5119]: I0121 10:15:48.905892 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build"
Jan 21 10:15:52 crc kubenswrapper[5119]: I0121 10:15:52.779250 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"]
Jan 21 10:15:52 crc kubenswrapper[5119]: I0121 10:15:52.780259 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7" containerName="collect-profiles"
Jan 21 10:15:52 crc kubenswrapper[5119]: I0121 10:15:52.780276 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7" containerName="collect-profiles"
Jan 21 10:15:52 crc kubenswrapper[5119]: I0121 10:15:52.780289 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b392096e-f869-42e4-b405-995e0adf0568" containerName="git-clone"
Jan 21 10:15:52 crc kubenswrapper[5119]: I0121 10:15:52.780296 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="b392096e-f869-42e4-b405-995e0adf0568" containerName="git-clone"
Jan 21 10:15:52 crc kubenswrapper[5119]: I0121 10:15:52.780304 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b392096e-f869-42e4-b405-995e0adf0568" containerName="docker-build"
Jan 21 10:15:52 crc kubenswrapper[5119]: I0121 10:15:52.780311 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="b392096e-f869-42e4-b405-995e0adf0568" containerName="docker-build"
Jan 21 10:15:52 crc kubenswrapper[5119]: I0121 10:15:52.780335 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b392096e-f869-42e4-b405-995e0adf0568" containerName="manage-dockerfile"
Jan 21 10:15:52 crc kubenswrapper[5119]: I0121 10:15:52.780342 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="b392096e-f869-42e4-b405-995e0adf0568" containerName="manage-dockerfile"
Jan 21 10:15:52 crc kubenswrapper[5119]: I0121 10:15:52.780467 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="b392096e-f869-42e4-b405-995e0adf0568" containerName="docker-build"
Jan 21 10:15:52 crc kubenswrapper[5119]: I0121 10:15:52.780480 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7" containerName="collect-profiles"
Jan 21 10:15:53 crc kubenswrapper[5119]: I0121 10:15:53.951563 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"]
Jan 21 10:15:53 crc kubenswrapper[5119]: I0121 10:15:53.951738 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 21 10:15:53 crc kubenswrapper[5119]: I0121 10:15:53.953913 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-sys-config\""
Jan 21 10:15:53 crc kubenswrapper[5119]: I0121 10:15:53.954287 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-llwsp\""
Jan 21 10:15:53 crc kubenswrapper[5119]: I0121 10:15:53.954964 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-global-ca\""
Jan 21 10:15:53 crc kubenswrapper[5119]: I0121 10:15:53.959782 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-ca\""
Jan 21 10:15:53 crc kubenswrapper[5119]: I0121 10:15:53.993273 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ab547fc5-85d0-4789-8e8c-1cb97e644efb-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 21 10:15:53 crc kubenswrapper[5119]: I0121 10:15:53.993334 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5r46\" (UniqueName: \"kubernetes.io/projected/ab547fc5-85d0-4789-8e8c-1cb97e644efb-kube-api-access-c5r46\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 21 10:15:53 crc kubenswrapper[5119]: I0121 10:15:53.993386 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName:
\"kubernetes.io/empty-dir/ab547fc5-85d0-4789-8e8c-1cb97e644efb-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:53 crc kubenswrapper[5119]: I0121 10:15:53.993421 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ab547fc5-85d0-4789-8e8c-1cb97e644efb-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:53 crc kubenswrapper[5119]: I0121 10:15:53.993448 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/ab547fc5-85d0-4789-8e8c-1cb97e644efb-builder-dockercfg-llwsp-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:53 crc kubenswrapper[5119]: I0121 10:15:53.993529 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/ab547fc5-85d0-4789-8e8c-1cb97e644efb-builder-dockercfg-llwsp-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:53 crc kubenswrapper[5119]: I0121 10:15:53.993580 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ab547fc5-85d0-4789-8e8c-1cb97e644efb-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:53 crc 
kubenswrapper[5119]: I0121 10:15:53.993630 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ab547fc5-85d0-4789-8e8c-1cb97e644efb-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:53 crc kubenswrapper[5119]: I0121 10:15:53.993751 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ab547fc5-85d0-4789-8e8c-1cb97e644efb-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:53 crc kubenswrapper[5119]: I0121 10:15:53.993773 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ab547fc5-85d0-4789-8e8c-1cb97e644efb-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:53 crc kubenswrapper[5119]: I0121 10:15:53.993858 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ab547fc5-85d0-4789-8e8c-1cb97e644efb-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:53 crc kubenswrapper[5119]: I0121 10:15:53.993886 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ab547fc5-85d0-4789-8e8c-1cb97e644efb-build-proxy-ca-bundles\") pod 
\"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:54 crc kubenswrapper[5119]: I0121 10:15:54.094925 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/ab547fc5-85d0-4789-8e8c-1cb97e644efb-builder-dockercfg-llwsp-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:54 crc kubenswrapper[5119]: I0121 10:15:54.095124 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/ab547fc5-85d0-4789-8e8c-1cb97e644efb-builder-dockercfg-llwsp-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:54 crc kubenswrapper[5119]: I0121 10:15:54.095196 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ab547fc5-85d0-4789-8e8c-1cb97e644efb-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:54 crc kubenswrapper[5119]: I0121 10:15:54.095238 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ab547fc5-85d0-4789-8e8c-1cb97e644efb-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:54 crc kubenswrapper[5119]: I0121 10:15:54.095355 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/ab547fc5-85d0-4789-8e8c-1cb97e644efb-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:54 crc kubenswrapper[5119]: I0121 10:15:54.095390 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ab547fc5-85d0-4789-8e8c-1cb97e644efb-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:54 crc kubenswrapper[5119]: I0121 10:15:54.095496 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ab547fc5-85d0-4789-8e8c-1cb97e644efb-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:54 crc kubenswrapper[5119]: I0121 10:15:54.095532 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ab547fc5-85d0-4789-8e8c-1cb97e644efb-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:54 crc kubenswrapper[5119]: I0121 10:15:54.095629 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ab547fc5-85d0-4789-8e8c-1cb97e644efb-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:54 crc kubenswrapper[5119]: I0121 10:15:54.095644 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ab547fc5-85d0-4789-8e8c-1cb97e644efb-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:54 crc kubenswrapper[5119]: I0121 10:15:54.095712 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ab547fc5-85d0-4789-8e8c-1cb97e644efb-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:54 crc kubenswrapper[5119]: I0121 10:15:54.095751 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c5r46\" (UniqueName: \"kubernetes.io/projected/ab547fc5-85d0-4789-8e8c-1cb97e644efb-kube-api-access-c5r46\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:54 crc kubenswrapper[5119]: I0121 10:15:54.095802 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ab547fc5-85d0-4789-8e8c-1cb97e644efb-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:54 crc kubenswrapper[5119]: I0121 10:15:54.095862 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ab547fc5-85d0-4789-8e8c-1cb97e644efb-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:54 crc kubenswrapper[5119]: I0121 10:15:54.096184 5119 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ab547fc5-85d0-4789-8e8c-1cb97e644efb-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:54 crc kubenswrapper[5119]: I0121 10:15:54.096194 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ab547fc5-85d0-4789-8e8c-1cb97e644efb-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:54 crc kubenswrapper[5119]: I0121 10:15:54.096244 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ab547fc5-85d0-4789-8e8c-1cb97e644efb-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:54 crc kubenswrapper[5119]: I0121 10:15:54.096302 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ab547fc5-85d0-4789-8e8c-1cb97e644efb-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:54 crc kubenswrapper[5119]: I0121 10:15:54.096588 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ab547fc5-85d0-4789-8e8c-1cb97e644efb-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:54 crc kubenswrapper[5119]: I0121 
10:15:54.096856 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ab547fc5-85d0-4789-8e8c-1cb97e644efb-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:54 crc kubenswrapper[5119]: I0121 10:15:54.098018 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ab547fc5-85d0-4789-8e8c-1cb97e644efb-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:54 crc kubenswrapper[5119]: I0121 10:15:54.101194 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/ab547fc5-85d0-4789-8e8c-1cb97e644efb-builder-dockercfg-llwsp-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:54 crc kubenswrapper[5119]: I0121 10:15:54.101220 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/ab547fc5-85d0-4789-8e8c-1cb97e644efb-builder-dockercfg-llwsp-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:54 crc kubenswrapper[5119]: I0121 10:15:54.112671 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5r46\" (UniqueName: \"kubernetes.io/projected/ab547fc5-85d0-4789-8e8c-1cb97e644efb-kube-api-access-c5r46\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " 
pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:54 crc kubenswrapper[5119]: I0121 10:15:54.272198 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:15:54 crc kubenswrapper[5119]: I0121 10:15:54.504426 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 21 10:15:54 crc kubenswrapper[5119]: I0121 10:15:54.951537 5119 generic.go:358] "Generic (PLEG): container finished" podID="ab547fc5-85d0-4789-8e8c-1cb97e644efb" containerID="39fedb1d3560a25e86d2ade4af724fcb9c1d1dd443669904555fcc144fceba77" exitCode=0 Jan 21 10:15:54 crc kubenswrapper[5119]: I0121 10:15:54.951672 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"ab547fc5-85d0-4789-8e8c-1cb97e644efb","Type":"ContainerDied","Data":"39fedb1d3560a25e86d2ade4af724fcb9c1d1dd443669904555fcc144fceba77"} Jan 21 10:15:54 crc kubenswrapper[5119]: I0121 10:15:54.951715 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"ab547fc5-85d0-4789-8e8c-1cb97e644efb","Type":"ContainerStarted","Data":"67eb88051702bbb31be5de4c717effc2b9cae84643a649888c442832bcdb20de"} Jan 21 10:15:55 crc kubenswrapper[5119]: I0121 10:15:55.960016 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"ab547fc5-85d0-4789-8e8c-1cb97e644efb","Type":"ContainerStarted","Data":"dd309237ffbf42fe3ea520e4c3e8752c639a229afa65085759f64972a1240e98"} Jan 21 10:15:55 crc kubenswrapper[5119]: I0121 10:15:55.981965 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-1-build" podStartSLOduration=3.98194284 podStartE2EDuration="3.98194284s" podCreationTimestamp="2026-01-21 10:15:52 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:15:55.980973773 +0000 UTC m=+1271.649065451" watchObservedRunningTime="2026-01-21 10:15:55.98194284 +0000 UTC m=+1271.650034518" Jan 21 10:16:00 crc kubenswrapper[5119]: I0121 10:16:00.140791 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483176-vl5bq"] Jan 21 10:16:00 crc kubenswrapper[5119]: I0121 10:16:00.751045 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483176-vl5bq"] Jan 21 10:16:00 crc kubenswrapper[5119]: I0121 10:16:00.751804 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483176-vl5bq" Jan 21 10:16:00 crc kubenswrapper[5119]: I0121 10:16:00.755053 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:16:00 crc kubenswrapper[5119]: I0121 10:16:00.756121 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\"" Jan 21 10:16:00 crc kubenswrapper[5119]: I0121 10:16:00.756435 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:16:00 crc kubenswrapper[5119]: I0121 10:16:00.881107 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmqsw\" (UniqueName: \"kubernetes.io/projected/4808bb00-e516-4dc0-93b6-1acc311d4824-kube-api-access-fmqsw\") pod \"auto-csr-approver-29483176-vl5bq\" (UID: \"4808bb00-e516-4dc0-93b6-1acc311d4824\") " pod="openshift-infra/auto-csr-approver-29483176-vl5bq" Jan 21 10:16:00 crc kubenswrapper[5119]: I0121 10:16:00.982804 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fmqsw\" (UniqueName: 
\"kubernetes.io/projected/4808bb00-e516-4dc0-93b6-1acc311d4824-kube-api-access-fmqsw\") pod \"auto-csr-approver-29483176-vl5bq\" (UID: \"4808bb00-e516-4dc0-93b6-1acc311d4824\") " pod="openshift-infra/auto-csr-approver-29483176-vl5bq" Jan 21 10:16:01 crc kubenswrapper[5119]: I0121 10:16:01.013897 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmqsw\" (UniqueName: \"kubernetes.io/projected/4808bb00-e516-4dc0-93b6-1acc311d4824-kube-api-access-fmqsw\") pod \"auto-csr-approver-29483176-vl5bq\" (UID: \"4808bb00-e516-4dc0-93b6-1acc311d4824\") " pod="openshift-infra/auto-csr-approver-29483176-vl5bq" Jan 21 10:16:01 crc kubenswrapper[5119]: I0121 10:16:01.073491 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483176-vl5bq" Jan 21 10:16:01 crc kubenswrapper[5119]: I0121 10:16:01.268946 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483176-vl5bq"] Jan 21 10:16:02 crc kubenswrapper[5119]: I0121 10:16:02.012523 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483176-vl5bq" event={"ID":"4808bb00-e516-4dc0-93b6-1acc311d4824","Type":"ContainerStarted","Data":"3f5bdfa5310b66a40e262c4185cbdf7ce7e3ae53fcee471d4ac285c46a52e878"} Jan 21 10:16:03 crc kubenswrapper[5119]: I0121 10:16:03.020130 5119 generic.go:358] "Generic (PLEG): container finished" podID="4808bb00-e516-4dc0-93b6-1acc311d4824" containerID="d2582b5f1d51b696695d96908c4f45bacb172c32addb9a28eecf8fd1638cba16" exitCode=0 Jan 21 10:16:03 crc kubenswrapper[5119]: I0121 10:16:03.020213 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483176-vl5bq" event={"ID":"4808bb00-e516-4dc0-93b6-1acc311d4824","Type":"ContainerDied","Data":"d2582b5f1d51b696695d96908c4f45bacb172c32addb9a28eecf8fd1638cba16"} Jan 21 10:16:03 crc kubenswrapper[5119]: I0121 10:16:03.645224 5119 kubelet.go:2553] 
"SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 21 10:16:03 crc kubenswrapper[5119]: I0121 10:16:03.645506 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/prometheus-webhook-snmp-1-build" podUID="ab547fc5-85d0-4789-8e8c-1cb97e644efb" containerName="docker-build" containerID="cri-o://dd309237ffbf42fe3ea520e4c3e8752c639a229afa65085759f64972a1240e98" gracePeriod=30 Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.028564 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_ab547fc5-85d0-4789-8e8c-1cb97e644efb/docker-build/0.log" Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.029098 5119 generic.go:358] "Generic (PLEG): container finished" podID="ab547fc5-85d0-4789-8e8c-1cb97e644efb" containerID="dd309237ffbf42fe3ea520e4c3e8752c639a229afa65085759f64972a1240e98" exitCode=1 Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.029358 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"ab547fc5-85d0-4789-8e8c-1cb97e644efb","Type":"ContainerDied","Data":"dd309237ffbf42fe3ea520e4c3e8752c639a229afa65085759f64972a1240e98"} Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.029388 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"ab547fc5-85d0-4789-8e8c-1cb97e644efb","Type":"ContainerDied","Data":"67eb88051702bbb31be5de4c717effc2b9cae84643a649888c442832bcdb20de"} Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.029400 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67eb88051702bbb31be5de4c717effc2b9cae84643a649888c442832bcdb20de" Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.037810 5119 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_ab547fc5-85d0-4789-8e8c-1cb97e644efb/docker-build/0.log" Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.038126 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.123213 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ab547fc5-85d0-4789-8e8c-1cb97e644efb-build-system-configs\") pod \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.123265 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ab547fc5-85d0-4789-8e8c-1cb97e644efb-build-ca-bundles\") pod \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.123300 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ab547fc5-85d0-4789-8e8c-1cb97e644efb-node-pullsecrets\") pod \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.123372 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab547fc5-85d0-4789-8e8c-1cb97e644efb-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "ab547fc5-85d0-4789-8e8c-1cb97e644efb" (UID: "ab547fc5-85d0-4789-8e8c-1cb97e644efb"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.123664 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ab547fc5-85d0-4789-8e8c-1cb97e644efb-build-blob-cache\") pod \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.123723 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/ab547fc5-85d0-4789-8e8c-1cb97e644efb-builder-dockercfg-llwsp-pull\") pod \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.123760 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ab547fc5-85d0-4789-8e8c-1cb97e644efb-buildworkdir\") pod \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.123784 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ab547fc5-85d0-4789-8e8c-1cb97e644efb-build-proxy-ca-bundles\") pod \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.123835 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ab547fc5-85d0-4789-8e8c-1cb97e644efb-container-storage-root\") pod \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.123894 5119 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ab547fc5-85d0-4789-8e8c-1cb97e644efb-container-storage-run\") pod \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.123946 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5r46\" (UniqueName: \"kubernetes.io/projected/ab547fc5-85d0-4789-8e8c-1cb97e644efb-kube-api-access-c5r46\") pod \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.123975 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/ab547fc5-85d0-4789-8e8c-1cb97e644efb-builder-dockercfg-llwsp-push\") pod \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.124036 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ab547fc5-85d0-4789-8e8c-1cb97e644efb-buildcachedir\") pod \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\" (UID: \"ab547fc5-85d0-4789-8e8c-1cb97e644efb\") " Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.124168 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab547fc5-85d0-4789-8e8c-1cb97e644efb-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "ab547fc5-85d0-4789-8e8c-1cb97e644efb" (UID: "ab547fc5-85d0-4789-8e8c-1cb97e644efb"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.124219 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab547fc5-85d0-4789-8e8c-1cb97e644efb-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "ab547fc5-85d0-4789-8e8c-1cb97e644efb" (UID: "ab547fc5-85d0-4789-8e8c-1cb97e644efb"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.124545 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab547fc5-85d0-4789-8e8c-1cb97e644efb-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "ab547fc5-85d0-4789-8e8c-1cb97e644efb" (UID: "ab547fc5-85d0-4789-8e8c-1cb97e644efb"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.124645 5119 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ab547fc5-85d0-4789-8e8c-1cb97e644efb-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.124663 5119 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ab547fc5-85d0-4789-8e8c-1cb97e644efb-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.124675 5119 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ab547fc5-85d0-4789-8e8c-1cb97e644efb-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.124819 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab547fc5-85d0-4789-8e8c-1cb97e644efb-build-system-configs" (OuterVolumeSpecName: 
"build-system-configs") pod "ab547fc5-85d0-4789-8e8c-1cb97e644efb" (UID: "ab547fc5-85d0-4789-8e8c-1cb97e644efb"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.124960 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab547fc5-85d0-4789-8e8c-1cb97e644efb-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "ab547fc5-85d0-4789-8e8c-1cb97e644efb" (UID: "ab547fc5-85d0-4789-8e8c-1cb97e644efb"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.125005 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab547fc5-85d0-4789-8e8c-1cb97e644efb-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "ab547fc5-85d0-4789-8e8c-1cb97e644efb" (UID: "ab547fc5-85d0-4789-8e8c-1cb97e644efb"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.164837 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab547fc5-85d0-4789-8e8c-1cb97e644efb-kube-api-access-c5r46" (OuterVolumeSpecName: "kube-api-access-c5r46") pod "ab547fc5-85d0-4789-8e8c-1cb97e644efb" (UID: "ab547fc5-85d0-4789-8e8c-1cb97e644efb"). InnerVolumeSpecName "kube-api-access-c5r46". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.164889 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab547fc5-85d0-4789-8e8c-1cb97e644efb-builder-dockercfg-llwsp-pull" (OuterVolumeSpecName: "builder-dockercfg-llwsp-pull") pod "ab547fc5-85d0-4789-8e8c-1cb97e644efb" (UID: "ab547fc5-85d0-4789-8e8c-1cb97e644efb"). 
InnerVolumeSpecName "builder-dockercfg-llwsp-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.164944 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab547fc5-85d0-4789-8e8c-1cb97e644efb-builder-dockercfg-llwsp-push" (OuterVolumeSpecName: "builder-dockercfg-llwsp-push") pod "ab547fc5-85d0-4789-8e8c-1cb97e644efb" (UID: "ab547fc5-85d0-4789-8e8c-1cb97e644efb"). InnerVolumeSpecName "builder-dockercfg-llwsp-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.214961 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483176-vl5bq" Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.217007 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab547fc5-85d0-4789-8e8c-1cb97e644efb-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "ab547fc5-85d0-4789-8e8c-1cb97e644efb" (UID: "ab547fc5-85d0-4789-8e8c-1cb97e644efb"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.226499 5119 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ab547fc5-85d0-4789-8e8c-1cb97e644efb-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.226665 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c5r46\" (UniqueName: \"kubernetes.io/projected/ab547fc5-85d0-4789-8e8c-1cb97e644efb-kube-api-access-c5r46\") on node \"crc\" DevicePath \"\"" Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.226757 5119 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/ab547fc5-85d0-4789-8e8c-1cb97e644efb-builder-dockercfg-llwsp-push\") on node \"crc\" DevicePath \"\"" Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.226909 5119 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ab547fc5-85d0-4789-8e8c-1cb97e644efb-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.227078 5119 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ab547fc5-85d0-4789-8e8c-1cb97e644efb-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.227186 5119 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ab547fc5-85d0-4789-8e8c-1cb97e644efb-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.227301 5119 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/ab547fc5-85d0-4789-8e8c-1cb97e644efb-builder-dockercfg-llwsp-pull\") on 
node \"crc\" DevicePath \"\"" Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.227430 5119 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ab547fc5-85d0-4789-8e8c-1cb97e644efb-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.328768 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmqsw\" (UniqueName: \"kubernetes.io/projected/4808bb00-e516-4dc0-93b6-1acc311d4824-kube-api-access-fmqsw\") pod \"4808bb00-e516-4dc0-93b6-1acc311d4824\" (UID: \"4808bb00-e516-4dc0-93b6-1acc311d4824\") " Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.331846 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4808bb00-e516-4dc0-93b6-1acc311d4824-kube-api-access-fmqsw" (OuterVolumeSpecName: "kube-api-access-fmqsw") pod "4808bb00-e516-4dc0-93b6-1acc311d4824" (UID: "4808bb00-e516-4dc0-93b6-1acc311d4824"). InnerVolumeSpecName "kube-api-access-fmqsw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.430213 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fmqsw\" (UniqueName: \"kubernetes.io/projected/4808bb00-e516-4dc0-93b6-1acc311d4824-kube-api-access-fmqsw\") on node \"crc\" DevicePath \"\"" Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.506086 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab547fc5-85d0-4789-8e8c-1cb97e644efb-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "ab547fc5-85d0-4789-8e8c-1cb97e644efb" (UID: "ab547fc5-85d0-4789-8e8c-1cb97e644efb"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:16:04 crc kubenswrapper[5119]: I0121 10:16:04.531158 5119 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ab547fc5-85d0-4789-8e8c-1cb97e644efb-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.037481 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483176-vl5bq" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.037540 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.037556 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483176-vl5bq" event={"ID":"4808bb00-e516-4dc0-93b6-1acc311d4824","Type":"ContainerDied","Data":"3f5bdfa5310b66a40e262c4185cbdf7ce7e3ae53fcee471d4ac285c46a52e878"} Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.037946 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f5bdfa5310b66a40e262c4185cbdf7ce7e3ae53fcee471d4ac285c46a52e878" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.063243 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.075474 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.270939 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483170-g2th8"] Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.276354 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483170-g2th8"] Jan 21 10:16:05 crc 
kubenswrapper[5119]: I0121 10:16:05.332803 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.333777 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ab547fc5-85d0-4789-8e8c-1cb97e644efb" containerName="docker-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.333802 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab547fc5-85d0-4789-8e8c-1cb97e644efb" containerName="docker-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.333830 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4808bb00-e516-4dc0-93b6-1acc311d4824" containerName="oc" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.333840 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="4808bb00-e516-4dc0-93b6-1acc311d4824" containerName="oc" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.333858 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ab547fc5-85d0-4789-8e8c-1cb97e644efb" containerName="manage-dockerfile" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.333867 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab547fc5-85d0-4789-8e8c-1cb97e644efb" containerName="manage-dockerfile" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.334023 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="4808bb00-e516-4dc0-93b6-1acc311d4824" containerName="oc" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.334067 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="ab547fc5-85d0-4789-8e8c-1cb97e644efb" containerName="docker-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.524672 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.524873 5119 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.526930 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-ca\"" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.527053 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-sys-config\"" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.527249 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-global-ca\"" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.529550 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-llwsp\"" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.645359 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.645405 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.645424 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.645467 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.645534 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tncq\" (UniqueName: \"kubernetes.io/projected/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-kube-api-access-4tncq\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.645572 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.645615 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 
10:16:05.645640 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.645668 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-builder-dockercfg-llwsp-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.645770 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-builder-dockercfg-llwsp-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.645875 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.645914 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-build-ca-bundles\") 
pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.747259 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.747298 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4tncq\" (UniqueName: \"kubernetes.io/projected/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-kube-api-access-4tncq\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.747356 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.747503 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.747523 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.747526 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.747623 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-builder-dockercfg-llwsp-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.747672 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-builder-dockercfg-llwsp-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.747834 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.747873 5119 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.747925 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.747964 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.747985 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.748149 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: 
I0121 10:16:05.748156 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.748298 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.748339 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.748442 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.748476 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc 
kubenswrapper[5119]: I0121 10:16:05.748712 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.749711 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.753219 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-builder-dockercfg-llwsp-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.757651 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-builder-dockercfg-llwsp-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.762793 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tncq\" (UniqueName: \"kubernetes.io/projected/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-kube-api-access-4tncq\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " 
pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:05 crc kubenswrapper[5119]: I0121 10:16:05.840443 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:16:06 crc kubenswrapper[5119]: I0121 10:16:06.302218 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 21 10:16:06 crc kubenswrapper[5119]: I0121 10:16:06.597649 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ad0ffb1-b9b5-48c1-9cbf-4aaca690c873" path="/var/lib/kubelet/pods/8ad0ffb1-b9b5-48c1-9cbf-4aaca690c873/volumes" Jan 21 10:16:06 crc kubenswrapper[5119]: I0121 10:16:06.598655 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab547fc5-85d0-4789-8e8c-1cb97e644efb" path="/var/lib/kubelet/pods/ab547fc5-85d0-4789-8e8c-1cb97e644efb/volumes" Jan 21 10:16:07 crc kubenswrapper[5119]: I0121 10:16:07.050252 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7","Type":"ContainerStarted","Data":"8d7a12ad585e72b4558180c0fc9136019980b2479bcfd8295ef24553276274f3"} Jan 21 10:16:07 crc kubenswrapper[5119]: I0121 10:16:07.050304 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7","Type":"ContainerStarted","Data":"f9ad5e6eecaf4b73011f24a8c64f4f416c3c5c16c5598156ef01adca97db0872"} Jan 21 10:16:08 crc kubenswrapper[5119]: I0121 10:16:08.058463 5119 generic.go:358] "Generic (PLEG): container finished" podID="dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7" containerID="8d7a12ad585e72b4558180c0fc9136019980b2479bcfd8295ef24553276274f3" exitCode=0 Jan 21 10:16:08 crc kubenswrapper[5119]: I0121 10:16:08.058551 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7","Type":"ContainerDied","Data":"8d7a12ad585e72b4558180c0fc9136019980b2479bcfd8295ef24553276274f3"} Jan 21 10:16:10 crc kubenswrapper[5119]: I0121 10:16:10.075212 5119 generic.go:358] "Generic (PLEG): container finished" podID="dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7" containerID="117a8391ab439d8c897ecb6a8542d0f3cbabe75a1fa916bfa2cc2b9408c69653" exitCode=0 Jan 21 10:16:10 crc kubenswrapper[5119]: I0121 10:16:10.075398 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7","Type":"ContainerDied","Data":"117a8391ab439d8c897ecb6a8542d0f3cbabe75a1fa916bfa2cc2b9408c69653"} Jan 21 10:16:10 crc kubenswrapper[5119]: I0121 10:16:10.112993 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7/manage-dockerfile/0.log" Jan 21 10:16:11 crc kubenswrapper[5119]: I0121 10:16:11.083003 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7","Type":"ContainerStarted","Data":"1633e70bf2db5e0b5b72a085fb3a856b8f786c16973cf6e993b666e082e2db26"} Jan 21 10:16:11 crc kubenswrapper[5119]: I0121 10:16:11.119258 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-2-build" podStartSLOduration=6.119237955 podStartE2EDuration="6.119237955s" podCreationTimestamp="2026-01-21 10:16:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:16:11.10216494 +0000 UTC m=+1286.770256638" watchObservedRunningTime="2026-01-21 10:16:11.119237955 +0000 UTC m=+1286.787329633" Jan 21 10:16:19 crc kubenswrapper[5119]: I0121 10:16:19.919041 5119 
patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:16:19 crc kubenswrapper[5119]: I0121 10:16:19.919478 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:16:30 crc kubenswrapper[5119]: I0121 10:16:30.846216 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-pc5lh" podUID="eb15b3ad-46ff-46ed-bd2b-b15ac9c4a321" containerName="registry-server" probeResult="failure" output=< Jan 21 10:16:30 crc kubenswrapper[5119]: timeout: failed to connect service ":50051" within 1s Jan 21 10:16:30 crc kubenswrapper[5119]: > Jan 21 10:16:47 crc kubenswrapper[5119]: I0121 10:16:47.555163 5119 scope.go:117] "RemoveContainer" containerID="e5a92ef9fec23f592cd3391f23b5319cd41d024ae5accd50aabc2c556ddcb006" Jan 21 10:16:49 crc kubenswrapper[5119]: I0121 10:16:49.919075 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:16:49 crc kubenswrapper[5119]: I0121 10:16:49.920162 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Jan 21 10:17:19 crc kubenswrapper[5119]: I0121 10:17:19.918829 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:17:19 crc kubenswrapper[5119]: I0121 10:17:19.919329 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:17:19 crc kubenswrapper[5119]: I0121 10:17:19.919378 5119 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" Jan 21 10:17:19 crc kubenswrapper[5119]: I0121 10:17:19.919846 5119 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e1293b9d5697f64c75dfcb0e9afb6682f3461979e9927eeb7658215e6f071a1d"} pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 10:17:19 crc kubenswrapper[5119]: I0121 10:17:19.919900 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" containerID="cri-o://e1293b9d5697f64c75dfcb0e9afb6682f3461979e9927eeb7658215e6f071a1d" gracePeriod=600 Jan 21 10:17:26 crc kubenswrapper[5119]: I0121 10:17:26.441285 5119 generic.go:358] "Generic (PLEG): container finished" podID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" 
containerID="e1293b9d5697f64c75dfcb0e9afb6682f3461979e9927eeb7658215e6f071a1d" exitCode=0 Jan 21 10:17:26 crc kubenswrapper[5119]: I0121 10:17:26.441422 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerDied","Data":"e1293b9d5697f64c75dfcb0e9afb6682f3461979e9927eeb7658215e6f071a1d"} Jan 21 10:17:26 crc kubenswrapper[5119]: I0121 10:17:26.442038 5119 scope.go:117] "RemoveContainer" containerID="639f312973ca7ed8a8e84c76403e2b53399a57adb4ec6e14a566fd4af25f3c2b" Jan 21 10:17:31 crc kubenswrapper[5119]: I0121 10:17:31.500954 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerStarted","Data":"b6ab9884520f29aaaa049d566e0f03918d804eb56ab65d3fab45a8a4f5ef9ba3"} Jan 21 10:17:31 crc kubenswrapper[5119]: I0121 10:17:31.962272 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ghv9j"] Jan 21 10:17:33 crc kubenswrapper[5119]: I0121 10:17:33.787070 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ghv9j" Jan 21 10:17:33 crc kubenswrapper[5119]: I0121 10:17:33.805332 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ghv9j"] Jan 21 10:17:33 crc kubenswrapper[5119]: I0121 10:17:33.922021 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca5c5baa-18f7-4678-a5a0-82bb1df40158-catalog-content\") pod \"certified-operators-ghv9j\" (UID: \"ca5c5baa-18f7-4678-a5a0-82bb1df40158\") " pod="openshift-marketplace/certified-operators-ghv9j" Jan 21 10:17:33 crc kubenswrapper[5119]: I0121 10:17:33.922063 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca5c5baa-18f7-4678-a5a0-82bb1df40158-utilities\") pod \"certified-operators-ghv9j\" (UID: \"ca5c5baa-18f7-4678-a5a0-82bb1df40158\") " pod="openshift-marketplace/certified-operators-ghv9j" Jan 21 10:17:33 crc kubenswrapper[5119]: I0121 10:17:33.922198 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmx7x\" (UniqueName: \"kubernetes.io/projected/ca5c5baa-18f7-4678-a5a0-82bb1df40158-kube-api-access-nmx7x\") pod \"certified-operators-ghv9j\" (UID: \"ca5c5baa-18f7-4678-a5a0-82bb1df40158\") " pod="openshift-marketplace/certified-operators-ghv9j" Jan 21 10:17:34 crc kubenswrapper[5119]: I0121 10:17:34.023977 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca5c5baa-18f7-4678-a5a0-82bb1df40158-catalog-content\") pod \"certified-operators-ghv9j\" (UID: \"ca5c5baa-18f7-4678-a5a0-82bb1df40158\") " pod="openshift-marketplace/certified-operators-ghv9j" Jan 21 10:17:34 crc kubenswrapper[5119]: I0121 10:17:34.024202 5119 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca5c5baa-18f7-4678-a5a0-82bb1df40158-utilities\") pod \"certified-operators-ghv9j\" (UID: \"ca5c5baa-18f7-4678-a5a0-82bb1df40158\") " pod="openshift-marketplace/certified-operators-ghv9j" Jan 21 10:17:34 crc kubenswrapper[5119]: I0121 10:17:34.024468 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nmx7x\" (UniqueName: \"kubernetes.io/projected/ca5c5baa-18f7-4678-a5a0-82bb1df40158-kube-api-access-nmx7x\") pod \"certified-operators-ghv9j\" (UID: \"ca5c5baa-18f7-4678-a5a0-82bb1df40158\") " pod="openshift-marketplace/certified-operators-ghv9j" Jan 21 10:17:34 crc kubenswrapper[5119]: I0121 10:17:34.024624 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca5c5baa-18f7-4678-a5a0-82bb1df40158-catalog-content\") pod \"certified-operators-ghv9j\" (UID: \"ca5c5baa-18f7-4678-a5a0-82bb1df40158\") " pod="openshift-marketplace/certified-operators-ghv9j" Jan 21 10:17:34 crc kubenswrapper[5119]: I0121 10:17:34.024666 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca5c5baa-18f7-4678-a5a0-82bb1df40158-utilities\") pod \"certified-operators-ghv9j\" (UID: \"ca5c5baa-18f7-4678-a5a0-82bb1df40158\") " pod="openshift-marketplace/certified-operators-ghv9j" Jan 21 10:17:34 crc kubenswrapper[5119]: I0121 10:17:34.053305 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmx7x\" (UniqueName: \"kubernetes.io/projected/ca5c5baa-18f7-4678-a5a0-82bb1df40158-kube-api-access-nmx7x\") pod \"certified-operators-ghv9j\" (UID: \"ca5c5baa-18f7-4678-a5a0-82bb1df40158\") " pod="openshift-marketplace/certified-operators-ghv9j" Jan 21 10:17:34 crc kubenswrapper[5119]: I0121 10:17:34.123109 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ghv9j" Jan 21 10:17:34 crc kubenswrapper[5119]: I0121 10:17:34.550556 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ghv9j"] Jan 21 10:17:34 crc kubenswrapper[5119]: W0121 10:17:34.553869 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca5c5baa_18f7_4678_a5a0_82bb1df40158.slice/crio-e10efa51135a791407b4477bf71ffc2bbfede523f07ce8e1e656814d0b23f6bd WatchSource:0}: Error finding container e10efa51135a791407b4477bf71ffc2bbfede523f07ce8e1e656814d0b23f6bd: Status 404 returned error can't find the container with id e10efa51135a791407b4477bf71ffc2bbfede523f07ce8e1e656814d0b23f6bd Jan 21 10:17:35 crc kubenswrapper[5119]: I0121 10:17:35.528478 5119 generic.go:358] "Generic (PLEG): container finished" podID="ca5c5baa-18f7-4678-a5a0-82bb1df40158" containerID="dcadc3c3d1702f41eab8987251bceff6f6d55e53ed1a556208d30fe6c6587bb9" exitCode=0 Jan 21 10:17:35 crc kubenswrapper[5119]: I0121 10:17:35.528575 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghv9j" event={"ID":"ca5c5baa-18f7-4678-a5a0-82bb1df40158","Type":"ContainerDied","Data":"dcadc3c3d1702f41eab8987251bceff6f6d55e53ed1a556208d30fe6c6587bb9"} Jan 21 10:17:35 crc kubenswrapper[5119]: I0121 10:17:35.528961 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghv9j" event={"ID":"ca5c5baa-18f7-4678-a5a0-82bb1df40158","Type":"ContainerStarted","Data":"e10efa51135a791407b4477bf71ffc2bbfede523f07ce8e1e656814d0b23f6bd"} Jan 21 10:17:42 crc kubenswrapper[5119]: I0121 10:17:42.574882 5119 generic.go:358] "Generic (PLEG): container finished" podID="dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7" containerID="1633e70bf2db5e0b5b72a085fb3a856b8f786c16973cf6e993b666e082e2db26" exitCode=0 Jan 21 10:17:42 crc kubenswrapper[5119]: I0121 
10:17:42.574994 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7","Type":"ContainerDied","Data":"1633e70bf2db5e0b5b72a085fb3a856b8f786c16973cf6e993b666e082e2db26"} Jan 21 10:17:42 crc kubenswrapper[5119]: I0121 10:17:42.577390 5119 generic.go:358] "Generic (PLEG): container finished" podID="ca5c5baa-18f7-4678-a5a0-82bb1df40158" containerID="855ed08dd4125509af04599a57bb12fb9beea2488863f78229fffb1f848a1871" exitCode=0 Jan 21 10:17:42 crc kubenswrapper[5119]: I0121 10:17:42.577478 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghv9j" event={"ID":"ca5c5baa-18f7-4678-a5a0-82bb1df40158","Type":"ContainerDied","Data":"855ed08dd4125509af04599a57bb12fb9beea2488863f78229fffb1f848a1871"} Jan 21 10:17:43 crc kubenswrapper[5119]: I0121 10:17:43.586146 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghv9j" event={"ID":"ca5c5baa-18f7-4678-a5a0-82bb1df40158","Type":"ContainerStarted","Data":"6d0c32dd1e8fd8337c9174744abc121d79c09ba31193b2d88d9d9d1e99adda82"} Jan 21 10:17:43 crc kubenswrapper[5119]: I0121 10:17:43.822376 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:17:43 crc kubenswrapper[5119]: I0121 10:17:43.957661 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-buildworkdir\") pod \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " Jan 21 10:17:43 crc kubenswrapper[5119]: I0121 10:17:43.957782 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-container-storage-root\") pod \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " Jan 21 10:17:43 crc kubenswrapper[5119]: I0121 10:17:43.957807 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-build-system-configs\") pod \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " Jan 21 10:17:43 crc kubenswrapper[5119]: I0121 10:17:43.957852 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-build-blob-cache\") pod \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " Jan 21 10:17:43 crc kubenswrapper[5119]: I0121 10:17:43.958416 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7" (UID: "dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:17:43 crc kubenswrapper[5119]: I0121 10:17:43.958859 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7" (UID: "dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:17:43 crc kubenswrapper[5119]: I0121 10:17:43.959286 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7" (UID: "dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:17:43 crc kubenswrapper[5119]: I0121 10:17:43.961729 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-build-ca-bundles\") pod \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " Jan 21 10:17:43 crc kubenswrapper[5119]: I0121 10:17:43.961877 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-node-pullsecrets\") pod \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " Jan 21 10:17:43 crc kubenswrapper[5119]: I0121 10:17:43.961915 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-builder-dockercfg-llwsp-pull\") pod \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\" (UID: 
\"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " Jan 21 10:17:43 crc kubenswrapper[5119]: I0121 10:17:43.961947 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7" (UID: "dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 10:17:43 crc kubenswrapper[5119]: I0121 10:17:43.961987 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tncq\" (UniqueName: \"kubernetes.io/projected/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-kube-api-access-4tncq\") pod \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " Jan 21 10:17:43 crc kubenswrapper[5119]: I0121 10:17:43.962046 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-buildcachedir\") pod \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " Jan 21 10:17:43 crc kubenswrapper[5119]: I0121 10:17:43.962112 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-container-storage-run\") pod \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " Jan 21 10:17:43 crc kubenswrapper[5119]: I0121 10:17:43.962166 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-builder-dockercfg-llwsp-push\") pod \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " Jan 21 10:17:43 crc kubenswrapper[5119]: 
I0121 10:17:43.962228 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-build-proxy-ca-bundles\") pod \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\" (UID: \"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7\") " Jan 21 10:17:43 crc kubenswrapper[5119]: I0121 10:17:43.962401 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7" (UID: "dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 10:17:43 crc kubenswrapper[5119]: I0121 10:17:43.962833 5119 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:43 crc kubenswrapper[5119]: I0121 10:17:43.962853 5119 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:43 crc kubenswrapper[5119]: I0121 10:17:43.962867 5119 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:43 crc kubenswrapper[5119]: I0121 10:17:43.962881 5119 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:43 crc kubenswrapper[5119]: I0121 10:17:43.962893 5119 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" 
(UniqueName: \"kubernetes.io/host-path/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:43 crc kubenswrapper[5119]: I0121 10:17:43.963126 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7" (UID: "dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:17:43 crc kubenswrapper[5119]: I0121 10:17:43.963734 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7" (UID: "dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:17:43 crc kubenswrapper[5119]: I0121 10:17:43.967985 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-kube-api-access-4tncq" (OuterVolumeSpecName: "kube-api-access-4tncq") pod "dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7" (UID: "dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7"). InnerVolumeSpecName "kube-api-access-4tncq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:17:43 crc kubenswrapper[5119]: I0121 10:17:43.967999 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-builder-dockercfg-llwsp-push" (OuterVolumeSpecName: "builder-dockercfg-llwsp-push") pod "dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7" (UID: "dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7"). InnerVolumeSpecName "builder-dockercfg-llwsp-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:17:43 crc kubenswrapper[5119]: I0121 10:17:43.975387 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-builder-dockercfg-llwsp-pull" (OuterVolumeSpecName: "builder-dockercfg-llwsp-pull") pod "dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7" (UID: "dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7"). InnerVolumeSpecName "builder-dockercfg-llwsp-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:17:44 crc kubenswrapper[5119]: I0121 10:17:44.067123 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7" (UID: "dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:17:44 crc kubenswrapper[5119]: I0121 10:17:44.067780 5119 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:44 crc kubenswrapper[5119]: I0121 10:17:44.067795 5119 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-builder-dockercfg-llwsp-pull\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:44 crc kubenswrapper[5119]: I0121 10:17:44.067807 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4tncq\" (UniqueName: \"kubernetes.io/projected/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-kube-api-access-4tncq\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:44 crc kubenswrapper[5119]: I0121 10:17:44.067816 5119 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:44 crc kubenswrapper[5119]: I0121 10:17:44.067825 5119 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-builder-dockercfg-llwsp-push\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:44 crc kubenswrapper[5119]: I0121 10:17:44.067833 5119 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:44 crc kubenswrapper[5119]: I0121 10:17:44.598692 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 21 10:17:44 crc kubenswrapper[5119]: I0121 10:17:44.606792 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7","Type":"ContainerDied","Data":"f9ad5e6eecaf4b73011f24a8c64f4f416c3c5c16c5598156ef01adca97db0872"} Jan 21 10:17:44 crc kubenswrapper[5119]: I0121 10:17:44.606854 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9ad5e6eecaf4b73011f24a8c64f4f416c3c5c16c5598156ef01adca97db0872" Jan 21 10:17:44 crc kubenswrapper[5119]: I0121 10:17:44.621857 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ghv9j" podStartSLOduration=8.756699738 podStartE2EDuration="13.621838465s" podCreationTimestamp="2026-01-21 10:17:31 +0000 UTC" firstStartedPulling="2026-01-21 10:17:36.539100321 +0000 UTC m=+1372.207191999" lastFinishedPulling="2026-01-21 10:17:41.404239048 +0000 UTC m=+1377.072330726" observedRunningTime="2026-01-21 10:17:44.617753205 +0000 UTC m=+1380.285844913" 
watchObservedRunningTime="2026-01-21 10:17:44.621838465 +0000 UTC m=+1380.289930143" Jan 21 10:17:44 crc kubenswrapper[5119]: I0121 10:17:44.881776 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7" (UID: "dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:17:44 crc kubenswrapper[5119]: I0121 10:17:44.981647 5119 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.027131 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.030009 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7" containerName="docker-build" Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.030167 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7" containerName="docker-build" Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.030296 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7" containerName="manage-dockerfile" Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.030409 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7" containerName="manage-dockerfile" Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.030590 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7" containerName="git-clone" Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.030744 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7" containerName="git-clone" Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.031179 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7" containerName="docker-build" Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.375838 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-ghv9j" Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.375883 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.375912 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ghv9j" Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.376022 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.376568 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ghv9j" Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.379139 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-bundle-1-global-ca\"" Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.379883 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-bundle-1-sys-config\"" Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.379932 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-bundle-1-ca\"" Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.381208 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-llwsp\"" Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.419365 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/421bc4d8-43cf-4a39-9731-da166a2d38eb-builder-dockercfg-llwsp-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.419404 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/421bc4d8-43cf-4a39-9731-da166a2d38eb-builder-dockercfg-llwsp-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " 
pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.419428 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/421bc4d8-43cf-4a39-9731-da166a2d38eb-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.419528 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/421bc4d8-43cf-4a39-9731-da166a2d38eb-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.419570 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/421bc4d8-43cf-4a39-9731-da166a2d38eb-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.419538 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ghv9j"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.419734 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/421bc4d8-43cf-4a39-9731-da166a2d38eb-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.419775 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/421bc4d8-43cf-4a39-9731-da166a2d38eb-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.419819 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/421bc4d8-43cf-4a39-9731-da166a2d38eb-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.419897 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smlff\" (UniqueName: \"kubernetes.io/projected/421bc4d8-43cf-4a39-9731-da166a2d38eb-kube-api-access-smlff\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.420151 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/421bc4d8-43cf-4a39-9731-da166a2d38eb-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.420251 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/421bc4d8-43cf-4a39-9731-da166a2d38eb-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.420290 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/421bc4d8-43cf-4a39-9731-da166a2d38eb-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.521472 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/421bc4d8-43cf-4a39-9731-da166a2d38eb-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.521858 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/421bc4d8-43cf-4a39-9731-da166a2d38eb-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.521890 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/421bc4d8-43cf-4a39-9731-da166a2d38eb-builder-dockercfg-llwsp-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.521912 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/421bc4d8-43cf-4a39-9731-da166a2d38eb-builder-dockercfg-llwsp-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.521932 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/421bc4d8-43cf-4a39-9731-da166a2d38eb-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.521962 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/421bc4d8-43cf-4a39-9731-da166a2d38eb-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.521981 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/421bc4d8-43cf-4a39-9731-da166a2d38eb-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.522177 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/421bc4d8-43cf-4a39-9731-da166a2d38eb-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.522241 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/421bc4d8-43cf-4a39-9731-da166a2d38eb-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.522287 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/421bc4d8-43cf-4a39-9731-da166a2d38eb-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.522316 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/421bc4d8-43cf-4a39-9731-da166a2d38eb-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.522364 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/421bc4d8-43cf-4a39-9731-da166a2d38eb-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.522340 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-smlff\" (UniqueName: \"kubernetes.io/projected/421bc4d8-43cf-4a39-9731-da166a2d38eb-kube-api-access-smlff\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.522434 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/421bc4d8-43cf-4a39-9731-da166a2d38eb-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.522477 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/421bc4d8-43cf-4a39-9731-da166a2d38eb-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.522516 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/421bc4d8-43cf-4a39-9731-da166a2d38eb-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.522560 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/421bc4d8-43cf-4a39-9731-da166a2d38eb-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.522799 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/421bc4d8-43cf-4a39-9731-da166a2d38eb-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.523140 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/421bc4d8-43cf-4a39-9731-da166a2d38eb-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.523250 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/421bc4d8-43cf-4a39-9731-da166a2d38eb-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.523501 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/421bc4d8-43cf-4a39-9731-da166a2d38eb-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.530363 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/421bc4d8-43cf-4a39-9731-da166a2d38eb-builder-dockercfg-llwsp-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.530430 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/421bc4d8-43cf-4a39-9731-da166a2d38eb-builder-dockercfg-llwsp-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.538971 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-smlff\" (UniqueName: \"kubernetes.io/projected/421bc4d8-43cf-4a39-9731-da166a2d38eb-kube-api-access-smlff\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.611632 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ghv9j"]
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.705402 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:17:54 crc kubenswrapper[5119]: I0121 10:17:54.909132 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"]
Jan 21 10:17:55 crc kubenswrapper[5119]: I0121 10:17:55.663794 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"421bc4d8-43cf-4a39-9731-da166a2d38eb","Type":"ContainerStarted","Data":"e8bc9311eb9a0e322a1b4f0c55d121e39ccfb06bd4daa2aa0959046bb0726b35"}
Jan 21 10:17:55 crc kubenswrapper[5119]: I0121 10:17:55.664037 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ghv9j" podUID="ca5c5baa-18f7-4678-a5a0-82bb1df40158" containerName="registry-server" containerID="cri-o://6d0c32dd1e8fd8337c9174744abc121d79c09ba31193b2d88d9d9d1e99adda82" gracePeriod=2
Jan 21 10:17:56 crc kubenswrapper[5119]: I0121 10:17:56.520647 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ghv9j"
Jan 21 10:17:56 crc kubenswrapper[5119]: I0121 10:17:56.652713 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca5c5baa-18f7-4678-a5a0-82bb1df40158-utilities\") pod \"ca5c5baa-18f7-4678-a5a0-82bb1df40158\" (UID: \"ca5c5baa-18f7-4678-a5a0-82bb1df40158\") "
Jan 21 10:17:56 crc kubenswrapper[5119]: I0121 10:17:56.652801 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmx7x\" (UniqueName: \"kubernetes.io/projected/ca5c5baa-18f7-4678-a5a0-82bb1df40158-kube-api-access-nmx7x\") pod \"ca5c5baa-18f7-4678-a5a0-82bb1df40158\" (UID: \"ca5c5baa-18f7-4678-a5a0-82bb1df40158\") "
Jan 21 10:17:56 crc kubenswrapper[5119]: I0121 10:17:56.652829 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca5c5baa-18f7-4678-a5a0-82bb1df40158-catalog-content\") pod \"ca5c5baa-18f7-4678-a5a0-82bb1df40158\" (UID: \"ca5c5baa-18f7-4678-a5a0-82bb1df40158\") "
Jan 21 10:17:56 crc kubenswrapper[5119]: I0121 10:17:56.654578 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca5c5baa-18f7-4678-a5a0-82bb1df40158-utilities" (OuterVolumeSpecName: "utilities") pod "ca5c5baa-18f7-4678-a5a0-82bb1df40158" (UID: "ca5c5baa-18f7-4678-a5a0-82bb1df40158"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:17:56 crc kubenswrapper[5119]: I0121 10:17:56.659313 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca5c5baa-18f7-4678-a5a0-82bb1df40158-kube-api-access-nmx7x" (OuterVolumeSpecName: "kube-api-access-nmx7x") pod "ca5c5baa-18f7-4678-a5a0-82bb1df40158" (UID: "ca5c5baa-18f7-4678-a5a0-82bb1df40158"). InnerVolumeSpecName "kube-api-access-nmx7x". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:17:56 crc kubenswrapper[5119]: I0121 10:17:56.672653 5119 generic.go:358] "Generic (PLEG): container finished" podID="421bc4d8-43cf-4a39-9731-da166a2d38eb" containerID="99f70033747c3cbd096e7e93ad7954653e317507483059290e4b8f9421f8a1c8" exitCode=0
Jan 21 10:17:56 crc kubenswrapper[5119]: I0121 10:17:56.672872 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"421bc4d8-43cf-4a39-9731-da166a2d38eb","Type":"ContainerDied","Data":"99f70033747c3cbd096e7e93ad7954653e317507483059290e4b8f9421f8a1c8"}
Jan 21 10:17:56 crc kubenswrapper[5119]: I0121 10:17:56.680169 5119 generic.go:358] "Generic (PLEG): container finished" podID="ca5c5baa-18f7-4678-a5a0-82bb1df40158" containerID="6d0c32dd1e8fd8337c9174744abc121d79c09ba31193b2d88d9d9d1e99adda82" exitCode=0
Jan 21 10:17:56 crc kubenswrapper[5119]: I0121 10:17:56.680290 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghv9j" event={"ID":"ca5c5baa-18f7-4678-a5a0-82bb1df40158","Type":"ContainerDied","Data":"6d0c32dd1e8fd8337c9174744abc121d79c09ba31193b2d88d9d9d1e99adda82"}
Jan 21 10:17:56 crc kubenswrapper[5119]: I0121 10:17:56.680317 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghv9j" event={"ID":"ca5c5baa-18f7-4678-a5a0-82bb1df40158","Type":"ContainerDied","Data":"e10efa51135a791407b4477bf71ffc2bbfede523f07ce8e1e656814d0b23f6bd"}
Jan 21 10:17:56 crc kubenswrapper[5119]: I0121 10:17:56.680332 5119 scope.go:117] "RemoveContainer" containerID="6d0c32dd1e8fd8337c9174744abc121d79c09ba31193b2d88d9d9d1e99adda82"
Jan 21 10:17:56 crc kubenswrapper[5119]: I0121 10:17:56.680491 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ghv9j"
Jan 21 10:17:56 crc kubenswrapper[5119]: I0121 10:17:56.689327 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca5c5baa-18f7-4678-a5a0-82bb1df40158-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca5c5baa-18f7-4678-a5a0-82bb1df40158" (UID: "ca5c5baa-18f7-4678-a5a0-82bb1df40158"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:17:56 crc kubenswrapper[5119]: I0121 10:17:56.704333 5119 scope.go:117] "RemoveContainer" containerID="855ed08dd4125509af04599a57bb12fb9beea2488863f78229fffb1f848a1871"
Jan 21 10:17:56 crc kubenswrapper[5119]: I0121 10:17:56.719543 5119 scope.go:117] "RemoveContainer" containerID="dcadc3c3d1702f41eab8987251bceff6f6d55e53ed1a556208d30fe6c6587bb9"
Jan 21 10:17:56 crc kubenswrapper[5119]: I0121 10:17:56.736335 5119 scope.go:117] "RemoveContainer" containerID="6d0c32dd1e8fd8337c9174744abc121d79c09ba31193b2d88d9d9d1e99adda82"
Jan 21 10:17:56 crc kubenswrapper[5119]: E0121 10:17:56.736772 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d0c32dd1e8fd8337c9174744abc121d79c09ba31193b2d88d9d9d1e99adda82\": container with ID starting with 6d0c32dd1e8fd8337c9174744abc121d79c09ba31193b2d88d9d9d1e99adda82 not found: ID does not exist" containerID="6d0c32dd1e8fd8337c9174744abc121d79c09ba31193b2d88d9d9d1e99adda82"
Jan 21 10:17:56 crc kubenswrapper[5119]: I0121 10:17:56.736819 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d0c32dd1e8fd8337c9174744abc121d79c09ba31193b2d88d9d9d1e99adda82"} err="failed to get container status \"6d0c32dd1e8fd8337c9174744abc121d79c09ba31193b2d88d9d9d1e99adda82\": rpc error: code = NotFound desc = could not find container \"6d0c32dd1e8fd8337c9174744abc121d79c09ba31193b2d88d9d9d1e99adda82\": container with ID starting with 6d0c32dd1e8fd8337c9174744abc121d79c09ba31193b2d88d9d9d1e99adda82 not found: ID does not exist"
Jan 21 10:17:56 crc kubenswrapper[5119]: I0121 10:17:56.736848 5119 scope.go:117] "RemoveContainer" containerID="855ed08dd4125509af04599a57bb12fb9beea2488863f78229fffb1f848a1871"
Jan 21 10:17:56 crc kubenswrapper[5119]: E0121 10:17:56.737192 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"855ed08dd4125509af04599a57bb12fb9beea2488863f78229fffb1f848a1871\": container with ID starting with 855ed08dd4125509af04599a57bb12fb9beea2488863f78229fffb1f848a1871 not found: ID does not exist" containerID="855ed08dd4125509af04599a57bb12fb9beea2488863f78229fffb1f848a1871"
Jan 21 10:17:56 crc kubenswrapper[5119]: I0121 10:17:56.737235 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"855ed08dd4125509af04599a57bb12fb9beea2488863f78229fffb1f848a1871"} err="failed to get container status \"855ed08dd4125509af04599a57bb12fb9beea2488863f78229fffb1f848a1871\": rpc error: code = NotFound desc = could not find container \"855ed08dd4125509af04599a57bb12fb9beea2488863f78229fffb1f848a1871\": container with ID starting with 855ed08dd4125509af04599a57bb12fb9beea2488863f78229fffb1f848a1871 not found: ID does not exist"
Jan 21 10:17:56 crc kubenswrapper[5119]: I0121 10:17:56.737255 5119 scope.go:117] "RemoveContainer" containerID="dcadc3c3d1702f41eab8987251bceff6f6d55e53ed1a556208d30fe6c6587bb9"
Jan 21 10:17:56 crc kubenswrapper[5119]: E0121 10:17:56.737586 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcadc3c3d1702f41eab8987251bceff6f6d55e53ed1a556208d30fe6c6587bb9\": container with ID starting with dcadc3c3d1702f41eab8987251bceff6f6d55e53ed1a556208d30fe6c6587bb9 not found: ID does not exist" containerID="dcadc3c3d1702f41eab8987251bceff6f6d55e53ed1a556208d30fe6c6587bb9"
Jan 21 10:17:56 crc kubenswrapper[5119]: I0121 10:17:56.737623 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcadc3c3d1702f41eab8987251bceff6f6d55e53ed1a556208d30fe6c6587bb9"} err="failed to get container status \"dcadc3c3d1702f41eab8987251bceff6f6d55e53ed1a556208d30fe6c6587bb9\": rpc error: code = NotFound desc = could not find container \"dcadc3c3d1702f41eab8987251bceff6f6d55e53ed1a556208d30fe6c6587bb9\": container with ID starting with dcadc3c3d1702f41eab8987251bceff6f6d55e53ed1a556208d30fe6c6587bb9 not found: ID does not exist"
Jan 21 10:17:56 crc kubenswrapper[5119]: I0121 10:17:56.754498 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca5c5baa-18f7-4678-a5a0-82bb1df40158-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 10:17:56 crc kubenswrapper[5119]: I0121 10:17:56.754521 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmx7x\" (UniqueName: \"kubernetes.io/projected/ca5c5baa-18f7-4678-a5a0-82bb1df40158-kube-api-access-nmx7x\") on node \"crc\" DevicePath \"\""
Jan 21 10:17:56 crc kubenswrapper[5119]: I0121 10:17:56.754532 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca5c5baa-18f7-4678-a5a0-82bb1df40158-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 10:17:57 crc kubenswrapper[5119]: I0121 10:17:57.020709 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ghv9j"]
Jan 21 10:17:57 crc kubenswrapper[5119]: I0121 10:17:57.028126 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ghv9j"]
Jan 21 10:17:57 crc kubenswrapper[5119]: I0121 10:17:57.690389 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"421bc4d8-43cf-4a39-9731-da166a2d38eb","Type":"ContainerStarted","Data":"5400b7852c35fffafcc91609a8c95507ff70393dbd7bd92b0682bb8bb9daf859"}
Jan 21 10:17:57 crc kubenswrapper[5119]: I0121 10:17:57.713183 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-bundle-1-build" podStartSLOduration=3.713164075 podStartE2EDuration="3.713164075s" podCreationTimestamp="2026-01-21 10:17:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:17:57.709624418 +0000 UTC m=+1393.377716096" watchObservedRunningTime="2026-01-21 10:17:57.713164075 +0000 UTC m=+1393.381255753"
Jan 21 10:17:58 crc kubenswrapper[5119]: I0121 10:17:58.598398 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca5c5baa-18f7-4678-a5a0-82bb1df40158" path="/var/lib/kubelet/pods/ca5c5baa-18f7-4678-a5a0-82bb1df40158/volumes"
Jan 21 10:17:58 crc kubenswrapper[5119]: I0121 10:17:58.700642 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_421bc4d8-43cf-4a39-9731-da166a2d38eb/docker-build/0.log"
Jan 21 10:17:58 crc kubenswrapper[5119]: I0121 10:17:58.700959 5119 generic.go:358] "Generic (PLEG): container finished" podID="421bc4d8-43cf-4a39-9731-da166a2d38eb" containerID="5400b7852c35fffafcc91609a8c95507ff70393dbd7bd92b0682bb8bb9daf859" exitCode=1
Jan 21 10:17:58 crc kubenswrapper[5119]: I0121 10:17:58.701094 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"421bc4d8-43cf-4a39-9731-da166a2d38eb","Type":"ContainerDied","Data":"5400b7852c35fffafcc91609a8c95507ff70393dbd7bd92b0682bb8bb9daf859"}
Jan 21 10:17:59 crc kubenswrapper[5119]: I0121 10:17:59.955623 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_421bc4d8-43cf-4a39-9731-da166a2d38eb/docker-build/0.log"
Jan 21 10:17:59 crc kubenswrapper[5119]: I0121 10:17:59.956056 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.104904 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/421bc4d8-43cf-4a39-9731-da166a2d38eb-container-storage-run\") pod \"421bc4d8-43cf-4a39-9731-da166a2d38eb\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") "
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.104980 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/421bc4d8-43cf-4a39-9731-da166a2d38eb-buildworkdir\") pod \"421bc4d8-43cf-4a39-9731-da166a2d38eb\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") "
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.105033 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/421bc4d8-43cf-4a39-9731-da166a2d38eb-build-system-configs\") pod \"421bc4d8-43cf-4a39-9731-da166a2d38eb\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") "
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.105074 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/421bc4d8-43cf-4a39-9731-da166a2d38eb-build-ca-bundles\") pod \"421bc4d8-43cf-4a39-9731-da166a2d38eb\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") "
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.105099 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/421bc4d8-43cf-4a39-9731-da166a2d38eb-container-storage-root\") pod \"421bc4d8-43cf-4a39-9731-da166a2d38eb\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") "
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.105155 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/421bc4d8-43cf-4a39-9731-da166a2d38eb-builder-dockercfg-llwsp-pull\") pod \"421bc4d8-43cf-4a39-9731-da166a2d38eb\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") "
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.105183 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/421bc4d8-43cf-4a39-9731-da166a2d38eb-build-blob-cache\") pod \"421bc4d8-43cf-4a39-9731-da166a2d38eb\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") "
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.105205 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/421bc4d8-43cf-4a39-9731-da166a2d38eb-build-proxy-ca-bundles\") pod \"421bc4d8-43cf-4a39-9731-da166a2d38eb\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") "
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.105294 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/421bc4d8-43cf-4a39-9731-da166a2d38eb-buildcachedir\") pod \"421bc4d8-43cf-4a39-9731-da166a2d38eb\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") "
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.105320 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/421bc4d8-43cf-4a39-9731-da166a2d38eb-node-pullsecrets\") pod \"421bc4d8-43cf-4a39-9731-da166a2d38eb\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") "
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.105380 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smlff\" (UniqueName: \"kubernetes.io/projected/421bc4d8-43cf-4a39-9731-da166a2d38eb-kube-api-access-smlff\") pod \"421bc4d8-43cf-4a39-9731-da166a2d38eb\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") "
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.105511 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/421bc4d8-43cf-4a39-9731-da166a2d38eb-builder-dockercfg-llwsp-push\") pod \"421bc4d8-43cf-4a39-9731-da166a2d38eb\" (UID: \"421bc4d8-43cf-4a39-9731-da166a2d38eb\") "
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.106218 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/421bc4d8-43cf-4a39-9731-da166a2d38eb-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "421bc4d8-43cf-4a39-9731-da166a2d38eb" (UID: "421bc4d8-43cf-4a39-9731-da166a2d38eb"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.106680 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/421bc4d8-43cf-4a39-9731-da166a2d38eb-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "421bc4d8-43cf-4a39-9731-da166a2d38eb" (UID: "421bc4d8-43cf-4a39-9731-da166a2d38eb"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.106711 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/421bc4d8-43cf-4a39-9731-da166a2d38eb-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "421bc4d8-43cf-4a39-9731-da166a2d38eb" (UID: "421bc4d8-43cf-4a39-9731-da166a2d38eb"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.107178 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/421bc4d8-43cf-4a39-9731-da166a2d38eb-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "421bc4d8-43cf-4a39-9731-da166a2d38eb" (UID: "421bc4d8-43cf-4a39-9731-da166a2d38eb"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.107274 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/421bc4d8-43cf-4a39-9731-da166a2d38eb-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "421bc4d8-43cf-4a39-9731-da166a2d38eb" (UID: "421bc4d8-43cf-4a39-9731-da166a2d38eb"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.107380 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/421bc4d8-43cf-4a39-9731-da166a2d38eb-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "421bc4d8-43cf-4a39-9731-da166a2d38eb" (UID: "421bc4d8-43cf-4a39-9731-da166a2d38eb"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.107464 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/421bc4d8-43cf-4a39-9731-da166a2d38eb-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "421bc4d8-43cf-4a39-9731-da166a2d38eb" (UID: "421bc4d8-43cf-4a39-9731-da166a2d38eb"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.107754 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/421bc4d8-43cf-4a39-9731-da166a2d38eb-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "421bc4d8-43cf-4a39-9731-da166a2d38eb" (UID: "421bc4d8-43cf-4a39-9731-da166a2d38eb"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.107985 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/421bc4d8-43cf-4a39-9731-da166a2d38eb-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "421bc4d8-43cf-4a39-9731-da166a2d38eb" (UID: "421bc4d8-43cf-4a39-9731-da166a2d38eb"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.112045 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/421bc4d8-43cf-4a39-9731-da166a2d38eb-builder-dockercfg-llwsp-push" (OuterVolumeSpecName: "builder-dockercfg-llwsp-push") pod "421bc4d8-43cf-4a39-9731-da166a2d38eb" (UID: "421bc4d8-43cf-4a39-9731-da166a2d38eb"). InnerVolumeSpecName "builder-dockercfg-llwsp-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.112056 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/421bc4d8-43cf-4a39-9731-da166a2d38eb-kube-api-access-smlff" (OuterVolumeSpecName: "kube-api-access-smlff") pod "421bc4d8-43cf-4a39-9731-da166a2d38eb" (UID: "421bc4d8-43cf-4a39-9731-da166a2d38eb"). InnerVolumeSpecName "kube-api-access-smlff". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.112192 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/421bc4d8-43cf-4a39-9731-da166a2d38eb-builder-dockercfg-llwsp-pull" (OuterVolumeSpecName: "builder-dockercfg-llwsp-pull") pod "421bc4d8-43cf-4a39-9731-da166a2d38eb" (UID: "421bc4d8-43cf-4a39-9731-da166a2d38eb"). InnerVolumeSpecName "builder-dockercfg-llwsp-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.135152 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483178-q5fxc"]
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.136363 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="421bc4d8-43cf-4a39-9731-da166a2d38eb" containerName="docker-build"
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.136381 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="421bc4d8-43cf-4a39-9731-da166a2d38eb" containerName="docker-build"
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.136415 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ca5c5baa-18f7-4678-a5a0-82bb1df40158" containerName="extract-utilities"
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.136421 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca5c5baa-18f7-4678-a5a0-82bb1df40158" containerName="extract-utilities"
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.136438 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ca5c5baa-18f7-4678-a5a0-82bb1df40158" containerName="extract-content"
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.136445 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca5c5baa-18f7-4678-a5a0-82bb1df40158" containerName="extract-content"
Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.136455
5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ca5c5baa-18f7-4678-a5a0-82bb1df40158" containerName="registry-server" Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.136461 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca5c5baa-18f7-4678-a5a0-82bb1df40158" containerName="registry-server" Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.136505 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="421bc4d8-43cf-4a39-9731-da166a2d38eb" containerName="manage-dockerfile" Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.136510 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="421bc4d8-43cf-4a39-9731-da166a2d38eb" containerName="manage-dockerfile" Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.136925 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="421bc4d8-43cf-4a39-9731-da166a2d38eb" containerName="docker-build" Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.136943 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="ca5c5baa-18f7-4678-a5a0-82bb1df40158" containerName="registry-server" Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.141801 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483178-q5fxc" Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.144298 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.144364 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\"" Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.144636 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.145712 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483178-q5fxc"] Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.207084 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmk5g\" (UniqueName: \"kubernetes.io/projected/e867fbb0-fbdc-4af4-b712-c6107d53366e-kube-api-access-tmk5g\") pod \"auto-csr-approver-29483178-q5fxc\" (UID: \"e867fbb0-fbdc-4af4-b712-c6107d53366e\") " pod="openshift-infra/auto-csr-approver-29483178-q5fxc" Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.207173 5119 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/421bc4d8-43cf-4a39-9731-da166a2d38eb-builder-dockercfg-llwsp-push\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.207185 5119 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/421bc4d8-43cf-4a39-9731-da166a2d38eb-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.207196 5119 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/421bc4d8-43cf-4a39-9731-da166a2d38eb-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.207205 5119 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/421bc4d8-43cf-4a39-9731-da166a2d38eb-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.207214 5119 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/421bc4d8-43cf-4a39-9731-da166a2d38eb-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.207222 5119 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/421bc4d8-43cf-4a39-9731-da166a2d38eb-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.207230 5119 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/421bc4d8-43cf-4a39-9731-da166a2d38eb-builder-dockercfg-llwsp-pull\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.207238 5119 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/421bc4d8-43cf-4a39-9731-da166a2d38eb-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.207246 5119 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/421bc4d8-43cf-4a39-9731-da166a2d38eb-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.207255 5119 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/421bc4d8-43cf-4a39-9731-da166a2d38eb-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.207263 5119 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/421bc4d8-43cf-4a39-9731-da166a2d38eb-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.207271 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-smlff\" (UniqueName: \"kubernetes.io/projected/421bc4d8-43cf-4a39-9731-da166a2d38eb-kube-api-access-smlff\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.307964 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tmk5g\" (UniqueName: \"kubernetes.io/projected/e867fbb0-fbdc-4af4-b712-c6107d53366e-kube-api-access-tmk5g\") pod \"auto-csr-approver-29483178-q5fxc\" (UID: \"e867fbb0-fbdc-4af4-b712-c6107d53366e\") " pod="openshift-infra/auto-csr-approver-29483178-q5fxc" Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.325753 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmk5g\" (UniqueName: \"kubernetes.io/projected/e867fbb0-fbdc-4af4-b712-c6107d53366e-kube-api-access-tmk5g\") pod \"auto-csr-approver-29483178-q5fxc\" (UID: \"e867fbb0-fbdc-4af4-b712-c6107d53366e\") " pod="openshift-infra/auto-csr-approver-29483178-q5fxc" Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.487237 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483178-q5fxc" Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.661669 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483178-q5fxc"] Jan 21 10:18:00 crc kubenswrapper[5119]: W0121 10:18:00.666793 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode867fbb0_fbdc_4af4_b712_c6107d53366e.slice/crio-49774d4efd484f7d216e8ee3f4c37173857462d156b3eeca311fd72b9efe9341 WatchSource:0}: Error finding container 49774d4efd484f7d216e8ee3f4c37173857462d156b3eeca311fd72b9efe9341: Status 404 returned error can't find the container with id 49774d4efd484f7d216e8ee3f4c37173857462d156b3eeca311fd72b9efe9341 Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.728064 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_421bc4d8-43cf-4a39-9731-da166a2d38eb/docker-build/0.log" Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.728646 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"421bc4d8-43cf-4a39-9731-da166a2d38eb","Type":"ContainerDied","Data":"e8bc9311eb9a0e322a1b4f0c55d121e39ccfb06bd4daa2aa0959046bb0726b35"} Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.728684 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.728709 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8bc9311eb9a0e322a1b4f0c55d121e39ccfb06bd4daa2aa0959046bb0726b35" Jan 21 10:18:00 crc kubenswrapper[5119]: I0121 10:18:00.730442 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483178-q5fxc" event={"ID":"e867fbb0-fbdc-4af4-b712-c6107d53366e","Type":"ContainerStarted","Data":"49774d4efd484f7d216e8ee3f4c37173857462d156b3eeca311fd72b9efe9341"} Jan 21 10:18:02 crc kubenswrapper[5119]: I0121 10:18:02.746140 5119 generic.go:358] "Generic (PLEG): container finished" podID="e867fbb0-fbdc-4af4-b712-c6107d53366e" containerID="66b5b63cf4fa59d9b8d013c8cda68a8ee0daeffca56cdd2b879198cfb2c9c8c9" exitCode=0 Jan 21 10:18:02 crc kubenswrapper[5119]: I0121 10:18:02.746223 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483178-q5fxc" event={"ID":"e867fbb0-fbdc-4af4-b712-c6107d53366e","Type":"ContainerDied","Data":"66b5b63cf4fa59d9b8d013c8cda68a8ee0daeffca56cdd2b879198cfb2c9c8c9"} Jan 21 10:18:03 crc kubenswrapper[5119]: I0121 10:18:03.989430 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483178-q5fxc" Jan 21 10:18:04 crc kubenswrapper[5119]: I0121 10:18:04.161421 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmk5g\" (UniqueName: \"kubernetes.io/projected/e867fbb0-fbdc-4af4-b712-c6107d53366e-kube-api-access-tmk5g\") pod \"e867fbb0-fbdc-4af4-b712-c6107d53366e\" (UID: \"e867fbb0-fbdc-4af4-b712-c6107d53366e\") " Jan 21 10:18:04 crc kubenswrapper[5119]: I0121 10:18:04.167995 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e867fbb0-fbdc-4af4-b712-c6107d53366e-kube-api-access-tmk5g" (OuterVolumeSpecName: "kube-api-access-tmk5g") pod "e867fbb0-fbdc-4af4-b712-c6107d53366e" (UID: "e867fbb0-fbdc-4af4-b712-c6107d53366e"). InnerVolumeSpecName "kube-api-access-tmk5g". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:18:04 crc kubenswrapper[5119]: I0121 10:18:04.262689 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tmk5g\" (UniqueName: \"kubernetes.io/projected/e867fbb0-fbdc-4af4-b712-c6107d53366e-kube-api-access-tmk5g\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:04 crc kubenswrapper[5119]: I0121 10:18:04.531385 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f9bfc"] Jan 21 10:18:04 crc kubenswrapper[5119]: I0121 10:18:04.532937 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e867fbb0-fbdc-4af4-b712-c6107d53366e" containerName="oc" Jan 21 10:18:04 crc kubenswrapper[5119]: I0121 10:18:04.532955 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="e867fbb0-fbdc-4af4-b712-c6107d53366e" containerName="oc" Jan 21 10:18:04 crc kubenswrapper[5119]: I0121 10:18:04.533051 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="e867fbb0-fbdc-4af4-b712-c6107d53366e" containerName="oc" Jan 21 10:18:05 crc kubenswrapper[5119]: I0121 
10:18:05.857395 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483178-q5fxc" event={"ID":"e867fbb0-fbdc-4af4-b712-c6107d53366e","Type":"ContainerDied","Data":"49774d4efd484f7d216e8ee3f4c37173857462d156b3eeca311fd72b9efe9341"} Jan 21 10:18:05 crc kubenswrapper[5119]: I0121 10:18:05.857803 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49774d4efd484f7d216e8ee3f4c37173857462d156b3eeca311fd72b9efe9341" Jan 21 10:18:05 crc kubenswrapper[5119]: I0121 10:18:05.857829 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f9bfc"] Jan 21 10:18:05 crc kubenswrapper[5119]: I0121 10:18:05.857852 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Jan 21 10:18:05 crc kubenswrapper[5119]: I0121 10:18:05.857873 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Jan 21 10:18:05 crc kubenswrapper[5119]: I0121 10:18:05.857625 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f9bfc" Jan 21 10:18:05 crc kubenswrapper[5119]: I0121 10:18:05.857519 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483178-q5fxc" Jan 21 10:18:05 crc kubenswrapper[5119]: I0121 10:18:05.870931 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="421bc4d8-43cf-4a39-9731-da166a2d38eb" path="/var/lib/kubelet/pods/421bc4d8-43cf-4a39-9731-da166a2d38eb/volumes" Jan 21 10:18:05 crc kubenswrapper[5119]: I0121 10:18:05.872951 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483172-9blg4"] Jan 21 10:18:05 crc kubenswrapper[5119]: I0121 10:18:05.872999 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483172-9blg4"] Jan 21 10:18:05 crc kubenswrapper[5119]: I0121 10:18:05.893317 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8smrk\" (UniqueName: \"kubernetes.io/projected/6983b04a-0517-41e1-82ae-8978727b6e3d-kube-api-access-8smrk\") pod \"community-operators-f9bfc\" (UID: \"6983b04a-0517-41e1-82ae-8978727b6e3d\") " pod="openshift-marketplace/community-operators-f9bfc" Jan 21 10:18:05 crc kubenswrapper[5119]: I0121 10:18:05.893493 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6983b04a-0517-41e1-82ae-8978727b6e3d-utilities\") pod \"community-operators-f9bfc\" (UID: \"6983b04a-0517-41e1-82ae-8978727b6e3d\") " pod="openshift-marketplace/community-operators-f9bfc" Jan 21 10:18:05 crc kubenswrapper[5119]: I0121 10:18:05.894271 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6983b04a-0517-41e1-82ae-8978727b6e3d-catalog-content\") pod \"community-operators-f9bfc\" (UID: \"6983b04a-0517-41e1-82ae-8978727b6e3d\") " pod="openshift-marketplace/community-operators-f9bfc" Jan 21 10:18:05 crc kubenswrapper[5119]: I0121 10:18:05.995709 5119 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6983b04a-0517-41e1-82ae-8978727b6e3d-utilities\") pod \"community-operators-f9bfc\" (UID: \"6983b04a-0517-41e1-82ae-8978727b6e3d\") " pod="openshift-marketplace/community-operators-f9bfc" Jan 21 10:18:05 crc kubenswrapper[5119]: I0121 10:18:05.995834 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6983b04a-0517-41e1-82ae-8978727b6e3d-catalog-content\") pod \"community-operators-f9bfc\" (UID: \"6983b04a-0517-41e1-82ae-8978727b6e3d\") " pod="openshift-marketplace/community-operators-f9bfc" Jan 21 10:18:05 crc kubenswrapper[5119]: I0121 10:18:05.995900 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8smrk\" (UniqueName: \"kubernetes.io/projected/6983b04a-0517-41e1-82ae-8978727b6e3d-kube-api-access-8smrk\") pod \"community-operators-f9bfc\" (UID: \"6983b04a-0517-41e1-82ae-8978727b6e3d\") " pod="openshift-marketplace/community-operators-f9bfc" Jan 21 10:18:05 crc kubenswrapper[5119]: I0121 10:18:05.996644 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6983b04a-0517-41e1-82ae-8978727b6e3d-catalog-content\") pod \"community-operators-f9bfc\" (UID: \"6983b04a-0517-41e1-82ae-8978727b6e3d\") " pod="openshift-marketplace/community-operators-f9bfc" Jan 21 10:18:05 crc kubenswrapper[5119]: I0121 10:18:05.996834 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6983b04a-0517-41e1-82ae-8978727b6e3d-utilities\") pod \"community-operators-f9bfc\" (UID: \"6983b04a-0517-41e1-82ae-8978727b6e3d\") " pod="openshift-marketplace/community-operators-f9bfc" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.018239 5119 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8smrk\" (UniqueName: \"kubernetes.io/projected/6983b04a-0517-41e1-82ae-8978727b6e3d-kube-api-access-8smrk\") pod \"community-operators-f9bfc\" (UID: \"6983b04a-0517-41e1-82ae-8978727b6e3d\") " pod="openshift-marketplace/community-operators-f9bfc" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.130991 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"] Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.140291 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.141587 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"] Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.142758 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-bundle-2-sys-config\"" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.142787 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-bundle-2-global-ca\"" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.143137 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-llwsp\"" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.143735 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-bundle-2-ca\"" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.185408 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f9bfc" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.198371 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1154d85d-dc29-49ea-9f9d-e5264f980b9c-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.198620 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1154d85d-dc29-49ea-9f9d-e5264f980b9c-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.198754 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljc5h\" (UniqueName: \"kubernetes.io/projected/1154d85d-dc29-49ea-9f9d-e5264f980b9c-kube-api-access-ljc5h\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.198850 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1154d85d-dc29-49ea-9f9d-e5264f980b9c-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.198925 5119 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1154d85d-dc29-49ea-9f9d-e5264f980b9c-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.198998 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1154d85d-dc29-49ea-9f9d-e5264f980b9c-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.199062 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1154d85d-dc29-49ea-9f9d-e5264f980b9c-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.199189 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1154d85d-dc29-49ea-9f9d-e5264f980b9c-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.199295 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: 
\"kubernetes.io/secret/1154d85d-dc29-49ea-9f9d-e5264f980b9c-builder-dockercfg-llwsp-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.199395 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/1154d85d-dc29-49ea-9f9d-e5264f980b9c-builder-dockercfg-llwsp-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.199506 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1154d85d-dc29-49ea-9f9d-e5264f980b9c-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.199663 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1154d85d-dc29-49ea-9f9d-e5264f980b9c-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.301073 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ljc5h\" (UniqueName: \"kubernetes.io/projected/1154d85d-dc29-49ea-9f9d-e5264f980b9c-kube-api-access-ljc5h\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " 
pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.301420 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1154d85d-dc29-49ea-9f9d-e5264f980b9c-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.301451 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1154d85d-dc29-49ea-9f9d-e5264f980b9c-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.301474 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1154d85d-dc29-49ea-9f9d-e5264f980b9c-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.301493 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1154d85d-dc29-49ea-9f9d-e5264f980b9c-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.301518 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/1154d85d-dc29-49ea-9f9d-e5264f980b9c-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.301561 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/1154d85d-dc29-49ea-9f9d-e5264f980b9c-builder-dockercfg-llwsp-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.301587 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/1154d85d-dc29-49ea-9f9d-e5264f980b9c-builder-dockercfg-llwsp-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.301639 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1154d85d-dc29-49ea-9f9d-e5264f980b9c-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.302004 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1154d85d-dc29-49ea-9f9d-e5264f980b9c-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " 
pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.302132 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1154d85d-dc29-49ea-9f9d-e5264f980b9c-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.302250 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1154d85d-dc29-49ea-9f9d-e5264f980b9c-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.302344 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1154d85d-dc29-49ea-9f9d-e5264f980b9c-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.302372 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1154d85d-dc29-49ea-9f9d-e5264f980b9c-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.302345 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/1154d85d-dc29-49ea-9f9d-e5264f980b9c-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.302479 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1154d85d-dc29-49ea-9f9d-e5264f980b9c-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.302688 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1154d85d-dc29-49ea-9f9d-e5264f980b9c-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.302786 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1154d85d-dc29-49ea-9f9d-e5264f980b9c-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.302830 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1154d85d-dc29-49ea-9f9d-e5264f980b9c-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.303056 5119 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1154d85d-dc29-49ea-9f9d-e5264f980b9c-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.303179 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1154d85d-dc29-49ea-9f9d-e5264f980b9c-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.307121 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/1154d85d-dc29-49ea-9f9d-e5264f980b9c-builder-dockercfg-llwsp-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.307528 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/1154d85d-dc29-49ea-9f9d-e5264f980b9c-builder-dockercfg-llwsp-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.321187 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljc5h\" (UniqueName: \"kubernetes.io/projected/1154d85d-dc29-49ea-9f9d-e5264f980b9c-kube-api-access-ljc5h\") pod \"service-telemetry-operator-bundle-2-build\" (UID: 
\"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.364274 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f9bfc"] Jan 21 10:18:06 crc kubenswrapper[5119]: W0121 10:18:06.368447 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6983b04a_0517_41e1_82ae_8978727b6e3d.slice/crio-c2dd16dd7d336aafd93efca9c3e9c6c5b44e87f9bedef9193eb45262978335e5 WatchSource:0}: Error finding container c2dd16dd7d336aafd93efca9c3e9c6c5b44e87f9bedef9193eb45262978335e5: Status 404 returned error can't find the container with id c2dd16dd7d336aafd93efca9c3e9c6c5b44e87f9bedef9193eb45262978335e5 Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.463498 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.598790 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b6d1afc-21f7-4fdc-82af-808bff8dcc9f" path="/var/lib/kubelet/pods/0b6d1afc-21f7-4fdc-82af-808bff8dcc9f/volumes" Jan 21 10:18:06 crc kubenswrapper[5119]: W0121 10:18:06.718033 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1154d85d_dc29_49ea_9f9d_e5264f980b9c.slice/crio-8411c1aa4add88d6a89498d45e9f6bf8f1303012cab67a44b29927ffc06df5b7 WatchSource:0}: Error finding container 8411c1aa4add88d6a89498d45e9f6bf8f1303012cab67a44b29927ffc06df5b7: Status 404 returned error can't find the container with id 8411c1aa4add88d6a89498d45e9f6bf8f1303012cab67a44b29927ffc06df5b7 Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.718688 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["service-telemetry/service-telemetry-operator-bundle-2-build"] Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.777538 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"1154d85d-dc29-49ea-9f9d-e5264f980b9c","Type":"ContainerStarted","Data":"8411c1aa4add88d6a89498d45e9f6bf8f1303012cab67a44b29927ffc06df5b7"} Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.780743 5119 generic.go:358] "Generic (PLEG): container finished" podID="6983b04a-0517-41e1-82ae-8978727b6e3d" containerID="49a9d33afda69678172e2d9ebf5dc74251623a9a3f74442d608a2f2b328bf9be" exitCode=0 Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.780851 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9bfc" event={"ID":"6983b04a-0517-41e1-82ae-8978727b6e3d","Type":"ContainerDied","Data":"49a9d33afda69678172e2d9ebf5dc74251623a9a3f74442d608a2f2b328bf9be"} Jan 21 10:18:06 crc kubenswrapper[5119]: I0121 10:18:06.780880 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9bfc" event={"ID":"6983b04a-0517-41e1-82ae-8978727b6e3d","Type":"ContainerStarted","Data":"c2dd16dd7d336aafd93efca9c3e9c6c5b44e87f9bedef9193eb45262978335e5"} Jan 21 10:18:07 crc kubenswrapper[5119]: I0121 10:18:07.791141 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"1154d85d-dc29-49ea-9f9d-e5264f980b9c","Type":"ContainerStarted","Data":"13836c1981f755c08a5be24d8cf7b275ea4554039533960815c0aa25e27f16e7"} Jan 21 10:18:08 crc kubenswrapper[5119]: I0121 10:18:08.799107 5119 generic.go:358] "Generic (PLEG): container finished" podID="6983b04a-0517-41e1-82ae-8978727b6e3d" containerID="184781fc6c14afd38f345fc3e5d093a35d2e99f0393ad5ae610880621c027bb6" exitCode=0 Jan 21 10:18:08 crc kubenswrapper[5119]: I0121 10:18:08.799230 5119 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-f9bfc" event={"ID":"6983b04a-0517-41e1-82ae-8978727b6e3d","Type":"ContainerDied","Data":"184781fc6c14afd38f345fc3e5d093a35d2e99f0393ad5ae610880621c027bb6"} Jan 21 10:18:08 crc kubenswrapper[5119]: I0121 10:18:08.801336 5119 generic.go:358] "Generic (PLEG): container finished" podID="1154d85d-dc29-49ea-9f9d-e5264f980b9c" containerID="13836c1981f755c08a5be24d8cf7b275ea4554039533960815c0aa25e27f16e7" exitCode=0 Jan 21 10:18:08 crc kubenswrapper[5119]: I0121 10:18:08.801503 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"1154d85d-dc29-49ea-9f9d-e5264f980b9c","Type":"ContainerDied","Data":"13836c1981f755c08a5be24d8cf7b275ea4554039533960815c0aa25e27f16e7"} Jan 21 10:18:09 crc kubenswrapper[5119]: I0121 10:18:09.809258 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9bfc" event={"ID":"6983b04a-0517-41e1-82ae-8978727b6e3d","Type":"ContainerStarted","Data":"9b78b7719b7778a0f6e57e2a461284e165cfa294b38a96bbb9b033fe1c7303c0"} Jan 21 10:18:09 crc kubenswrapper[5119]: I0121 10:18:09.824689 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f9bfc" podStartSLOduration=4.898634941 podStartE2EDuration="5.824672189s" podCreationTimestamp="2026-01-21 10:18:04 +0000 UTC" firstStartedPulling="2026-01-21 10:18:06.782383734 +0000 UTC m=+1402.450475422" lastFinishedPulling="2026-01-21 10:18:07.708420982 +0000 UTC m=+1403.376512670" observedRunningTime="2026-01-21 10:18:09.822985033 +0000 UTC m=+1405.491076731" watchObservedRunningTime="2026-01-21 10:18:09.824672189 +0000 UTC m=+1405.492763867" Jan 21 10:18:10 crc kubenswrapper[5119]: I0121 10:18:10.817597 5119 generic.go:358] "Generic (PLEG): container finished" podID="1154d85d-dc29-49ea-9f9d-e5264f980b9c" 
containerID="3b09b4db6682661e36c3fc3c3daab910e5b6e4157e0c2016715da117b8a1a8ed" exitCode=0 Jan 21 10:18:10 crc kubenswrapper[5119]: I0121 10:18:10.817651 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"1154d85d-dc29-49ea-9f9d-e5264f980b9c","Type":"ContainerDied","Data":"3b09b4db6682661e36c3fc3c3daab910e5b6e4157e0c2016715da117b8a1a8ed"} Jan 21 10:18:10 crc kubenswrapper[5119]: I0121 10:18:10.850018 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-2-build_1154d85d-dc29-49ea-9f9d-e5264f980b9c/manage-dockerfile/0.log" Jan 21 10:18:11 crc kubenswrapper[5119]: I0121 10:18:11.825593 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"1154d85d-dc29-49ea-9f9d-e5264f980b9c","Type":"ContainerStarted","Data":"1c7fd1d8c4729a50b6aeff5fec0e4876ba8d35d7f950528e1fab456cb7c4964b"} Jan 21 10:18:11 crc kubenswrapper[5119]: I0121 10:18:11.852517 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-bundle-2-build" podStartSLOduration=5.852499758 podStartE2EDuration="5.852499758s" podCreationTimestamp="2026-01-21 10:18:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:18:11.850687319 +0000 UTC m=+1407.518778997" watchObservedRunningTime="2026-01-21 10:18:11.852499758 +0000 UTC m=+1407.520591436" Jan 21 10:18:16 crc kubenswrapper[5119]: I0121 10:18:16.185913 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f9bfc" Jan 21 10:18:16 crc kubenswrapper[5119]: I0121 10:18:16.186529 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-f9bfc" Jan 21 
10:18:16 crc kubenswrapper[5119]: I0121 10:18:16.222782 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f9bfc" Jan 21 10:18:16 crc kubenswrapper[5119]: I0121 10:18:16.864244 5119 generic.go:358] "Generic (PLEG): container finished" podID="1154d85d-dc29-49ea-9f9d-e5264f980b9c" containerID="1c7fd1d8c4729a50b6aeff5fec0e4876ba8d35d7f950528e1fab456cb7c4964b" exitCode=0 Jan 21 10:18:16 crc kubenswrapper[5119]: I0121 10:18:16.864530 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"1154d85d-dc29-49ea-9f9d-e5264f980b9c","Type":"ContainerDied","Data":"1c7fd1d8c4729a50b6aeff5fec0e4876ba8d35d7f950528e1fab456cb7c4964b"} Jan 21 10:18:16 crc kubenswrapper[5119]: I0121 10:18:16.907041 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f9bfc" Jan 21 10:18:16 crc kubenswrapper[5119]: I0121 10:18:16.942931 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f9bfc"] Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.131721 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.275958 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1154d85d-dc29-49ea-9f9d-e5264f980b9c-container-storage-run\") pod \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.276068 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/1154d85d-dc29-49ea-9f9d-e5264f980b9c-builder-dockercfg-llwsp-push\") pod \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.276089 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1154d85d-dc29-49ea-9f9d-e5264f980b9c-node-pullsecrets\") pod \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.276112 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1154d85d-dc29-49ea-9f9d-e5264f980b9c-buildcachedir\") pod \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.276157 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1154d85d-dc29-49ea-9f9d-e5264f980b9c-build-blob-cache\") pod \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.276193 5119 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1154d85d-dc29-49ea-9f9d-e5264f980b9c-build-ca-bundles\") pod \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.276235 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1154d85d-dc29-49ea-9f9d-e5264f980b9c-container-storage-root\") pod \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.276265 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/1154d85d-dc29-49ea-9f9d-e5264f980b9c-builder-dockercfg-llwsp-pull\") pod \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.276292 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1154d85d-dc29-49ea-9f9d-e5264f980b9c-buildworkdir\") pod \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.276311 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1154d85d-dc29-49ea-9f9d-e5264f980b9c-build-proxy-ca-bundles\") pod \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.276358 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljc5h\" (UniqueName: 
\"kubernetes.io/projected/1154d85d-dc29-49ea-9f9d-e5264f980b9c-kube-api-access-ljc5h\") pod \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.276410 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1154d85d-dc29-49ea-9f9d-e5264f980b9c-build-system-configs\") pod \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\" (UID: \"1154d85d-dc29-49ea-9f9d-e5264f980b9c\") " Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.276824 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1154d85d-dc29-49ea-9f9d-e5264f980b9c-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "1154d85d-dc29-49ea-9f9d-e5264f980b9c" (UID: "1154d85d-dc29-49ea-9f9d-e5264f980b9c"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.276970 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1154d85d-dc29-49ea-9f9d-e5264f980b9c-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "1154d85d-dc29-49ea-9f9d-e5264f980b9c" (UID: "1154d85d-dc29-49ea-9f9d-e5264f980b9c"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.277341 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1154d85d-dc29-49ea-9f9d-e5264f980b9c-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "1154d85d-dc29-49ea-9f9d-e5264f980b9c" (UID: "1154d85d-dc29-49ea-9f9d-e5264f980b9c"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.277394 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1154d85d-dc29-49ea-9f9d-e5264f980b9c-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "1154d85d-dc29-49ea-9f9d-e5264f980b9c" (UID: "1154d85d-dc29-49ea-9f9d-e5264f980b9c"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.277508 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1154d85d-dc29-49ea-9f9d-e5264f980b9c-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "1154d85d-dc29-49ea-9f9d-e5264f980b9c" (UID: "1154d85d-dc29-49ea-9f9d-e5264f980b9c"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.277684 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1154d85d-dc29-49ea-9f9d-e5264f980b9c-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "1154d85d-dc29-49ea-9f9d-e5264f980b9c" (UID: "1154d85d-dc29-49ea-9f9d-e5264f980b9c"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.278398 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1154d85d-dc29-49ea-9f9d-e5264f980b9c-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "1154d85d-dc29-49ea-9f9d-e5264f980b9c" (UID: "1154d85d-dc29-49ea-9f9d-e5264f980b9c"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.279221 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1154d85d-dc29-49ea-9f9d-e5264f980b9c-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "1154d85d-dc29-49ea-9f9d-e5264f980b9c" (UID: "1154d85d-dc29-49ea-9f9d-e5264f980b9c"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.283467 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1154d85d-dc29-49ea-9f9d-e5264f980b9c-builder-dockercfg-llwsp-push" (OuterVolumeSpecName: "builder-dockercfg-llwsp-push") pod "1154d85d-dc29-49ea-9f9d-e5264f980b9c" (UID: "1154d85d-dc29-49ea-9f9d-e5264f980b9c"). InnerVolumeSpecName "builder-dockercfg-llwsp-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.283586 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1154d85d-dc29-49ea-9f9d-e5264f980b9c-builder-dockercfg-llwsp-pull" (OuterVolumeSpecName: "builder-dockercfg-llwsp-pull") pod "1154d85d-dc29-49ea-9f9d-e5264f980b9c" (UID: "1154d85d-dc29-49ea-9f9d-e5264f980b9c"). InnerVolumeSpecName "builder-dockercfg-llwsp-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.283768 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1154d85d-dc29-49ea-9f9d-e5264f980b9c-kube-api-access-ljc5h" (OuterVolumeSpecName: "kube-api-access-ljc5h") pod "1154d85d-dc29-49ea-9f9d-e5264f980b9c" (UID: "1154d85d-dc29-49ea-9f9d-e5264f980b9c"). InnerVolumeSpecName "kube-api-access-ljc5h". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.285517 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1154d85d-dc29-49ea-9f9d-e5264f980b9c-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "1154d85d-dc29-49ea-9f9d-e5264f980b9c" (UID: "1154d85d-dc29-49ea-9f9d-e5264f980b9c"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.378127 5119 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/1154d85d-dc29-49ea-9f9d-e5264f980b9c-builder-dockercfg-llwsp-push\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.378164 5119 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1154d85d-dc29-49ea-9f9d-e5264f980b9c-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.378173 5119 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1154d85d-dc29-49ea-9f9d-e5264f980b9c-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.378181 5119 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1154d85d-dc29-49ea-9f9d-e5264f980b9c-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.378190 5119 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1154d85d-dc29-49ea-9f9d-e5264f980b9c-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.378200 5119 reconciler_common.go:299] "Volume 
detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1154d85d-dc29-49ea-9f9d-e5264f980b9c-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.378208 5119 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/1154d85d-dc29-49ea-9f9d-e5264f980b9c-builder-dockercfg-llwsp-pull\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.378218 5119 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1154d85d-dc29-49ea-9f9d-e5264f980b9c-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.378226 5119 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1154d85d-dc29-49ea-9f9d-e5264f980b9c-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.378234 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ljc5h\" (UniqueName: \"kubernetes.io/projected/1154d85d-dc29-49ea-9f9d-e5264f980b9c-kube-api-access-ljc5h\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.378242 5119 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1154d85d-dc29-49ea-9f9d-e5264f980b9c-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.378252 5119 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1154d85d-dc29-49ea-9f9d-e5264f980b9c-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.886108 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.886873 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f9bfc" podUID="6983b04a-0517-41e1-82ae-8978727b6e3d" containerName="registry-server" containerID="cri-o://9b78b7719b7778a0f6e57e2a461284e165cfa294b38a96bbb9b033fe1c7303c0" gracePeriod=2 Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.887060 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"1154d85d-dc29-49ea-9f9d-e5264f980b9c","Type":"ContainerDied","Data":"8411c1aa4add88d6a89498d45e9f6bf8f1303012cab67a44b29927ffc06df5b7"} Jan 21 10:18:18 crc kubenswrapper[5119]: I0121 10:18:18.887388 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8411c1aa4add88d6a89498d45e9f6bf8f1303012cab67a44b29927ffc06df5b7" Jan 21 10:18:19 crc kubenswrapper[5119]: I0121 10:18:19.210471 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f9bfc" Jan 21 10:18:19 crc kubenswrapper[5119]: I0121 10:18:19.291035 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6983b04a-0517-41e1-82ae-8978727b6e3d-catalog-content\") pod \"6983b04a-0517-41e1-82ae-8978727b6e3d\" (UID: \"6983b04a-0517-41e1-82ae-8978727b6e3d\") " Jan 21 10:18:19 crc kubenswrapper[5119]: I0121 10:18:19.291121 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8smrk\" (UniqueName: \"kubernetes.io/projected/6983b04a-0517-41e1-82ae-8978727b6e3d-kube-api-access-8smrk\") pod \"6983b04a-0517-41e1-82ae-8978727b6e3d\" (UID: \"6983b04a-0517-41e1-82ae-8978727b6e3d\") " Jan 21 10:18:19 crc kubenswrapper[5119]: I0121 10:18:19.291281 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6983b04a-0517-41e1-82ae-8978727b6e3d-utilities\") pod \"6983b04a-0517-41e1-82ae-8978727b6e3d\" (UID: \"6983b04a-0517-41e1-82ae-8978727b6e3d\") " Jan 21 10:18:19 crc kubenswrapper[5119]: I0121 10:18:19.293167 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6983b04a-0517-41e1-82ae-8978727b6e3d-utilities" (OuterVolumeSpecName: "utilities") pod "6983b04a-0517-41e1-82ae-8978727b6e3d" (UID: "6983b04a-0517-41e1-82ae-8978727b6e3d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:18:19 crc kubenswrapper[5119]: I0121 10:18:19.296658 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6983b04a-0517-41e1-82ae-8978727b6e3d-kube-api-access-8smrk" (OuterVolumeSpecName: "kube-api-access-8smrk") pod "6983b04a-0517-41e1-82ae-8978727b6e3d" (UID: "6983b04a-0517-41e1-82ae-8978727b6e3d"). InnerVolumeSpecName "kube-api-access-8smrk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:18:19 crc kubenswrapper[5119]: I0121 10:18:19.339670 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6983b04a-0517-41e1-82ae-8978727b6e3d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6983b04a-0517-41e1-82ae-8978727b6e3d" (UID: "6983b04a-0517-41e1-82ae-8978727b6e3d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:18:19 crc kubenswrapper[5119]: I0121 10:18:19.392877 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6983b04a-0517-41e1-82ae-8978727b6e3d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:19 crc kubenswrapper[5119]: I0121 10:18:19.392936 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8smrk\" (UniqueName: \"kubernetes.io/projected/6983b04a-0517-41e1-82ae-8978727b6e3d-kube-api-access-8smrk\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:19 crc kubenswrapper[5119]: I0121 10:18:19.392964 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6983b04a-0517-41e1-82ae-8978727b6e3d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:19 crc kubenswrapper[5119]: I0121 10:18:19.897724 5119 generic.go:358] "Generic (PLEG): container finished" podID="6983b04a-0517-41e1-82ae-8978727b6e3d" containerID="9b78b7719b7778a0f6e57e2a461284e165cfa294b38a96bbb9b033fe1c7303c0" exitCode=0 Jan 21 10:18:19 crc kubenswrapper[5119]: I0121 10:18:19.897834 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f9bfc" Jan 21 10:18:19 crc kubenswrapper[5119]: I0121 10:18:19.897861 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9bfc" event={"ID":"6983b04a-0517-41e1-82ae-8978727b6e3d","Type":"ContainerDied","Data":"9b78b7719b7778a0f6e57e2a461284e165cfa294b38a96bbb9b033fe1c7303c0"} Jan 21 10:18:19 crc kubenswrapper[5119]: I0121 10:18:19.897920 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9bfc" event={"ID":"6983b04a-0517-41e1-82ae-8978727b6e3d","Type":"ContainerDied","Data":"c2dd16dd7d336aafd93efca9c3e9c6c5b44e87f9bedef9193eb45262978335e5"} Jan 21 10:18:19 crc kubenswrapper[5119]: I0121 10:18:19.897945 5119 scope.go:117] "RemoveContainer" containerID="9b78b7719b7778a0f6e57e2a461284e165cfa294b38a96bbb9b033fe1c7303c0" Jan 21 10:18:19 crc kubenswrapper[5119]: I0121 10:18:19.931159 5119 scope.go:117] "RemoveContainer" containerID="184781fc6c14afd38f345fc3e5d093a35d2e99f0393ad5ae610880621c027bb6" Jan 21 10:18:19 crc kubenswrapper[5119]: I0121 10:18:19.931588 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f9bfc"] Jan 21 10:18:19 crc kubenswrapper[5119]: I0121 10:18:19.937592 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f9bfc"] Jan 21 10:18:19 crc kubenswrapper[5119]: I0121 10:18:19.958874 5119 scope.go:117] "RemoveContainer" containerID="49a9d33afda69678172e2d9ebf5dc74251623a9a3f74442d608a2f2b328bf9be" Jan 21 10:18:19 crc kubenswrapper[5119]: I0121 10:18:19.977509 5119 scope.go:117] "RemoveContainer" containerID="9b78b7719b7778a0f6e57e2a461284e165cfa294b38a96bbb9b033fe1c7303c0" Jan 21 10:18:19 crc kubenswrapper[5119]: E0121 10:18:19.978031 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9b78b7719b7778a0f6e57e2a461284e165cfa294b38a96bbb9b033fe1c7303c0\": container with ID starting with 9b78b7719b7778a0f6e57e2a461284e165cfa294b38a96bbb9b033fe1c7303c0 not found: ID does not exist" containerID="9b78b7719b7778a0f6e57e2a461284e165cfa294b38a96bbb9b033fe1c7303c0" Jan 21 10:18:19 crc kubenswrapper[5119]: I0121 10:18:19.978090 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b78b7719b7778a0f6e57e2a461284e165cfa294b38a96bbb9b033fe1c7303c0"} err="failed to get container status \"9b78b7719b7778a0f6e57e2a461284e165cfa294b38a96bbb9b033fe1c7303c0\": rpc error: code = NotFound desc = could not find container \"9b78b7719b7778a0f6e57e2a461284e165cfa294b38a96bbb9b033fe1c7303c0\": container with ID starting with 9b78b7719b7778a0f6e57e2a461284e165cfa294b38a96bbb9b033fe1c7303c0 not found: ID does not exist" Jan 21 10:18:19 crc kubenswrapper[5119]: I0121 10:18:19.978125 5119 scope.go:117] "RemoveContainer" containerID="184781fc6c14afd38f345fc3e5d093a35d2e99f0393ad5ae610880621c027bb6" Jan 21 10:18:19 crc kubenswrapper[5119]: E0121 10:18:19.978567 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"184781fc6c14afd38f345fc3e5d093a35d2e99f0393ad5ae610880621c027bb6\": container with ID starting with 184781fc6c14afd38f345fc3e5d093a35d2e99f0393ad5ae610880621c027bb6 not found: ID does not exist" containerID="184781fc6c14afd38f345fc3e5d093a35d2e99f0393ad5ae610880621c027bb6" Jan 21 10:18:19 crc kubenswrapper[5119]: I0121 10:18:19.978598 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"184781fc6c14afd38f345fc3e5d093a35d2e99f0393ad5ae610880621c027bb6"} err="failed to get container status \"184781fc6c14afd38f345fc3e5d093a35d2e99f0393ad5ae610880621c027bb6\": rpc error: code = NotFound desc = could not find container \"184781fc6c14afd38f345fc3e5d093a35d2e99f0393ad5ae610880621c027bb6\": container with ID 
starting with 184781fc6c14afd38f345fc3e5d093a35d2e99f0393ad5ae610880621c027bb6 not found: ID does not exist" Jan 21 10:18:19 crc kubenswrapper[5119]: I0121 10:18:19.978634 5119 scope.go:117] "RemoveContainer" containerID="49a9d33afda69678172e2d9ebf5dc74251623a9a3f74442d608a2f2b328bf9be" Jan 21 10:18:19 crc kubenswrapper[5119]: E0121 10:18:19.979334 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49a9d33afda69678172e2d9ebf5dc74251623a9a3f74442d608a2f2b328bf9be\": container with ID starting with 49a9d33afda69678172e2d9ebf5dc74251623a9a3f74442d608a2f2b328bf9be not found: ID does not exist" containerID="49a9d33afda69678172e2d9ebf5dc74251623a9a3f74442d608a2f2b328bf9be" Jan 21 10:18:19 crc kubenswrapper[5119]: I0121 10:18:19.979361 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49a9d33afda69678172e2d9ebf5dc74251623a9a3f74442d608a2f2b328bf9be"} err="failed to get container status \"49a9d33afda69678172e2d9ebf5dc74251623a9a3f74442d608a2f2b328bf9be\": rpc error: code = NotFound desc = could not find container \"49a9d33afda69678172e2d9ebf5dc74251623a9a3f74442d608a2f2b328bf9be\": container with ID starting with 49a9d33afda69678172e2d9ebf5dc74251623a9a3f74442d608a2f2b328bf9be not found: ID does not exist" Jan 21 10:18:20 crc kubenswrapper[5119]: I0121 10:18:20.598882 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6983b04a-0517-41e1-82ae-8978727b6e3d" path="/var/lib/kubelet/pods/6983b04a-0517-41e1-82ae-8978727b6e3d/volumes" Jan 21 10:18:22 crc kubenswrapper[5119]: I0121 10:18:22.598071 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Jan 21 10:18:22 crc kubenswrapper[5119]: I0121 10:18:22.598955 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6983b04a-0517-41e1-82ae-8978727b6e3d" containerName="registry-server" Jan 21 
10:18:22 crc kubenswrapper[5119]: I0121 10:18:22.598969 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="6983b04a-0517-41e1-82ae-8978727b6e3d" containerName="registry-server" Jan 21 10:18:22 crc kubenswrapper[5119]: I0121 10:18:22.598983 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1154d85d-dc29-49ea-9f9d-e5264f980b9c" containerName="docker-build" Jan 21 10:18:22 crc kubenswrapper[5119]: I0121 10:18:22.598988 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="1154d85d-dc29-49ea-9f9d-e5264f980b9c" containerName="docker-build" Jan 21 10:18:22 crc kubenswrapper[5119]: I0121 10:18:22.599005 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6983b04a-0517-41e1-82ae-8978727b6e3d" containerName="extract-utilities" Jan 21 10:18:22 crc kubenswrapper[5119]: I0121 10:18:22.599010 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="6983b04a-0517-41e1-82ae-8978727b6e3d" containerName="extract-utilities" Jan 21 10:18:22 crc kubenswrapper[5119]: I0121 10:18:22.599016 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1154d85d-dc29-49ea-9f9d-e5264f980b9c" containerName="git-clone" Jan 21 10:18:22 crc kubenswrapper[5119]: I0121 10:18:22.599021 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="1154d85d-dc29-49ea-9f9d-e5264f980b9c" containerName="git-clone" Jan 21 10:18:22 crc kubenswrapper[5119]: I0121 10:18:22.599029 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1154d85d-dc29-49ea-9f9d-e5264f980b9c" containerName="manage-dockerfile" Jan 21 10:18:22 crc kubenswrapper[5119]: I0121 10:18:22.599034 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="1154d85d-dc29-49ea-9f9d-e5264f980b9c" containerName="manage-dockerfile" Jan 21 10:18:22 crc kubenswrapper[5119]: I0121 10:18:22.599055 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="6983b04a-0517-41e1-82ae-8978727b6e3d" containerName="extract-content" Jan 21 10:18:22 crc kubenswrapper[5119]: I0121 10:18:22.599060 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="6983b04a-0517-41e1-82ae-8978727b6e3d" containerName="extract-content" Jan 21 10:18:22 crc kubenswrapper[5119]: I0121 10:18:22.599156 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="6983b04a-0517-41e1-82ae-8978727b6e3d" containerName="registry-server" Jan 21 10:18:22 crc kubenswrapper[5119]: I0121 10:18:22.599165 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="1154d85d-dc29-49ea-9f9d-e5264f980b9c" containerName="docker-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.151866 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.151982 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.156229 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-bundle-1-global-ca\"" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.156239 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-bundle-1-sys-config\"" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.156387 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-bundle-1-ca\"" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.159482 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-llwsp\"" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.271247 5119 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.271359 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-builder-dockercfg-llwsp-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.271439 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-builder-dockercfg-llwsp-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.271575 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.271659 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-build-proxy-ca-bundles\") pod 
\"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.271753 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.271869 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.271947 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkbhg\" (UniqueName: \"kubernetes.io/projected/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-kube-api-access-zkbhg\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.272003 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.272088 5119 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.272207 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.272320 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.374072 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.374588 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-build-system-configs\") pod 
\"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.374585 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.374711 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.374747 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-builder-dockercfg-llwsp-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.374789 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-builder-dockercfg-llwsp-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.374867 5119 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.374913 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.374915 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.374976 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.375037 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc 
kubenswrapper[5119]: I0121 10:18:25.375058 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zkbhg\" (UniqueName: \"kubernetes.io/projected/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-kube-api-access-zkbhg\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.375104 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.375132 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.375270 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.375433 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: 
\"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.375695 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.375932 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.375932 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.376234 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.376231 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.385655 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-builder-dockercfg-llwsp-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.385844 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-builder-dockercfg-llwsp-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.396028 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkbhg\" (UniqueName: \"kubernetes.io/projected/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-kube-api-access-zkbhg\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.474825 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.715157 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Jan 21 10:18:25 crc kubenswrapper[5119]: I0121 10:18:25.938968 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"e90bbb1f-aee5-4264-ac01-e97e9c1349e8","Type":"ContainerStarted","Data":"c81f95eadbea86b623d4825e05905179f9d7504e65ae2a51053dc807a0484254"} Jan 21 10:18:27 crc kubenswrapper[5119]: I0121 10:18:27.954936 5119 generic.go:358] "Generic (PLEG): container finished" podID="e90bbb1f-aee5-4264-ac01-e97e9c1349e8" containerID="baf092aef4f8e068011115e07621e0fd54e5919a6ac3aeb96a12dc2c5341ec81" exitCode=0 Jan 21 10:18:27 crc kubenswrapper[5119]: I0121 10:18:27.954992 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"e90bbb1f-aee5-4264-ac01-e97e9c1349e8","Type":"ContainerDied","Data":"baf092aef4f8e068011115e07621e0fd54e5919a6ac3aeb96a12dc2c5341ec81"} Jan 21 10:18:28 crc kubenswrapper[5119]: I0121 10:18:28.963651 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_e90bbb1f-aee5-4264-ac01-e97e9c1349e8/docker-build/0.log" Jan 21 10:18:28 crc kubenswrapper[5119]: I0121 10:18:28.964967 5119 generic.go:358] "Generic (PLEG): container finished" podID="e90bbb1f-aee5-4264-ac01-e97e9c1349e8" containerID="7de6ec9856e965b8204cf44dbfd193c0e5bc9c0937f19e2a722017a7e6b76816" exitCode=1 Jan 21 10:18:28 crc kubenswrapper[5119]: I0121 10:18:28.965112 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" 
event={"ID":"e90bbb1f-aee5-4264-ac01-e97e9c1349e8","Type":"ContainerDied","Data":"7de6ec9856e965b8204cf44dbfd193c0e5bc9c0937f19e2a722017a7e6b76816"} Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.246940 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_e90bbb1f-aee5-4264-ac01-e97e9c1349e8/docker-build/0.log" Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.248505 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.334297 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-buildworkdir\") pod \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.334346 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-build-system-configs\") pod \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.334456 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-builder-dockercfg-llwsp-push\") pod \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.334483 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-build-ca-bundles\") pod \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\" 
(UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.334530 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-container-storage-run\") pod \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.334622 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-build-blob-cache\") pod \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.334665 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-builder-dockercfg-llwsp-pull\") pod \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.334700 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-container-storage-root\") pod \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.334761 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkbhg\" (UniqueName: \"kubernetes.io/projected/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-kube-api-access-zkbhg\") pod \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") " Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.334795 5119 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-node-pullsecrets\") pod \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") "
Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.334828 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-build-proxy-ca-bundles\") pod \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") "
Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.334854 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-buildcachedir\") pod \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\" (UID: \"e90bbb1f-aee5-4264-ac01-e97e9c1349e8\") "
Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.335473 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "e90bbb1f-aee5-4264-ac01-e97e9c1349e8" (UID: "e90bbb1f-aee5-4264-ac01-e97e9c1349e8"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.335628 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "e90bbb1f-aee5-4264-ac01-e97e9c1349e8" (UID: "e90bbb1f-aee5-4264-ac01-e97e9c1349e8"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.335714 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "e90bbb1f-aee5-4264-ac01-e97e9c1349e8" (UID: "e90bbb1f-aee5-4264-ac01-e97e9c1349e8"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.335811 5119 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.335823 5119 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.335833 5119 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.335993 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "e90bbb1f-aee5-4264-ac01-e97e9c1349e8" (UID: "e90bbb1f-aee5-4264-ac01-e97e9c1349e8"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.336025 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "e90bbb1f-aee5-4264-ac01-e97e9c1349e8" (UID: "e90bbb1f-aee5-4264-ac01-e97e9c1349e8"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.336453 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "e90bbb1f-aee5-4264-ac01-e97e9c1349e8" (UID: "e90bbb1f-aee5-4264-ac01-e97e9c1349e8"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.336524 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "e90bbb1f-aee5-4264-ac01-e97e9c1349e8" (UID: "e90bbb1f-aee5-4264-ac01-e97e9c1349e8"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.337018 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "e90bbb1f-aee5-4264-ac01-e97e9c1349e8" (UID: "e90bbb1f-aee5-4264-ac01-e97e9c1349e8"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.337202 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "e90bbb1f-aee5-4264-ac01-e97e9c1349e8" (UID: "e90bbb1f-aee5-4264-ac01-e97e9c1349e8"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.341427 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-builder-dockercfg-llwsp-push" (OuterVolumeSpecName: "builder-dockercfg-llwsp-push") pod "e90bbb1f-aee5-4264-ac01-e97e9c1349e8" (UID: "e90bbb1f-aee5-4264-ac01-e97e9c1349e8"). InnerVolumeSpecName "builder-dockercfg-llwsp-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.341520 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-kube-api-access-zkbhg" (OuterVolumeSpecName: "kube-api-access-zkbhg") pod "e90bbb1f-aee5-4264-ac01-e97e9c1349e8" (UID: "e90bbb1f-aee5-4264-ac01-e97e9c1349e8"). InnerVolumeSpecName "kube-api-access-zkbhg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.341817 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-builder-dockercfg-llwsp-pull" (OuterVolumeSpecName: "builder-dockercfg-llwsp-pull") pod "e90bbb1f-aee5-4264-ac01-e97e9c1349e8" (UID: "e90bbb1f-aee5-4264-ac01-e97e9c1349e8"). InnerVolumeSpecName "builder-dockercfg-llwsp-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.437138 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zkbhg\" (UniqueName: \"kubernetes.io/projected/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-kube-api-access-zkbhg\") on node \"crc\" DevicePath \"\""
Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.437166 5119 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.437200 5119 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-buildworkdir\") on node \"crc\" DevicePath \"\""
Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.437209 5119 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.437217 5119 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-builder-dockercfg-llwsp-push\") on node \"crc\" DevicePath \"\""
Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.437226 5119 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.437234 5119 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.437242 5119 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-builder-dockercfg-llwsp-pull\") on node \"crc\" DevicePath \"\""
Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.437268 5119 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e90bbb1f-aee5-4264-ac01-e97e9c1349e8-container-storage-root\") on node \"crc\" DevicePath \"\""
Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.981974 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_e90bbb1f-aee5-4264-ac01-e97e9c1349e8/docker-build/0.log"
Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.982628 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build"
Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.982705 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"e90bbb1f-aee5-4264-ac01-e97e9c1349e8","Type":"ContainerDied","Data":"c81f95eadbea86b623d4825e05905179f9d7504e65ae2a51053dc807a0484254"}
Jan 21 10:18:30 crc kubenswrapper[5119]: I0121 10:18:30.982766 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c81f95eadbea86b623d4825e05905179f9d7504e65ae2a51053dc807a0484254"
Jan 21 10:18:33 crc kubenswrapper[5119]: I0121 10:18:33.513699 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"]
Jan 21 10:18:33 crc kubenswrapper[5119]: I0121 10:18:33.519747 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"]
Jan 21 10:18:34 crc kubenswrapper[5119]: I0121 10:18:34.597775 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e90bbb1f-aee5-4264-ac01-e97e9c1349e8" path="/var/lib/kubelet/pods/e90bbb1f-aee5-4264-ac01-e97e9c1349e8/volumes"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.220775 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"]
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.222477 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e90bbb1f-aee5-4264-ac01-e97e9c1349e8" containerName="manage-dockerfile"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.222520 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="e90bbb1f-aee5-4264-ac01-e97e9c1349e8" containerName="manage-dockerfile"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.222560 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e90bbb1f-aee5-4264-ac01-e97e9c1349e8" containerName="docker-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.222566 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="e90bbb1f-aee5-4264-ac01-e97e9c1349e8" containerName="docker-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.222709 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="e90bbb1f-aee5-4264-ac01-e97e9c1349e8" containerName="docker-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.233160 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.235761 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-bundle-2-sys-config\""
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.235764 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-bundle-2-global-ca\""
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.237185 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-llwsp\""
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.238343 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"]
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.239397 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-bundle-2-ca\""
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.299541 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbb62a87-888f-486e-9087-557a47d4754c-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.299580 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bbb62a87-888f-486e-9087-557a47d4754c-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.299630 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbb62a87-888f-486e-9087-557a47d4754c-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.299649 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bbb62a87-888f-486e-9087-557a47d4754c-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.299670 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bbb62a87-888f-486e-9087-557a47d4754c-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.299714 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bbb62a87-888f-486e-9087-557a47d4754c-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.299732 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/bbb62a87-888f-486e-9087-557a47d4754c-builder-dockercfg-llwsp-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.299752 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/bbb62a87-888f-486e-9087-557a47d4754c-builder-dockercfg-llwsp-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.299768 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bbb62a87-888f-486e-9087-557a47d4754c-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.299796 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6dzx\" (UniqueName: \"kubernetes.io/projected/bbb62a87-888f-486e-9087-557a47d4754c-kube-api-access-n6dzx\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.299812 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bbb62a87-888f-486e-9087-557a47d4754c-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.299831 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bbb62a87-888f-486e-9087-557a47d4754c-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.402682 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbb62a87-888f-486e-9087-557a47d4754c-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.403116 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bbb62a87-888f-486e-9087-557a47d4754c-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.403265 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbb62a87-888f-486e-9087-557a47d4754c-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.403350 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bbb62a87-888f-486e-9087-557a47d4754c-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.403433 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bbb62a87-888f-486e-9087-557a47d4754c-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.403593 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bbb62a87-888f-486e-9087-557a47d4754c-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.403709 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/bbb62a87-888f-486e-9087-557a47d4754c-builder-dockercfg-llwsp-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.403822 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/bbb62a87-888f-486e-9087-557a47d4754c-builder-dockercfg-llwsp-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.403736 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bbb62a87-888f-486e-9087-557a47d4754c-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.403712 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbb62a87-888f-486e-9087-557a47d4754c-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.403925 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bbb62a87-888f-486e-9087-557a47d4754c-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.404005 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bbb62a87-888f-486e-9087-557a47d4754c-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.404020 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n6dzx\" (UniqueName: \"kubernetes.io/projected/bbb62a87-888f-486e-9087-557a47d4754c-kube-api-access-n6dzx\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.404041 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bbb62a87-888f-486e-9087-557a47d4754c-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.404063 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bbb62a87-888f-486e-9087-557a47d4754c-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.404149 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bbb62a87-888f-486e-9087-557a47d4754c-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.404200 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bbb62a87-888f-486e-9087-557a47d4754c-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.404356 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bbb62a87-888f-486e-9087-557a47d4754c-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.404395 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bbb62a87-888f-486e-9087-557a47d4754c-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.405032 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbb62a87-888f-486e-9087-557a47d4754c-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.405236 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bbb62a87-888f-486e-9087-557a47d4754c-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.409505 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/bbb62a87-888f-486e-9087-557a47d4754c-builder-dockercfg-llwsp-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.416042 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/bbb62a87-888f-486e-9087-557a47d4754c-builder-dockercfg-llwsp-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.421515 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6dzx\" (UniqueName: \"kubernetes.io/projected/bbb62a87-888f-486e-9087-557a47d4754c-kube-api-access-n6dzx\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.554510 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:35 crc kubenswrapper[5119]: I0121 10:18:35.787840 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"]
Jan 21 10:18:35 crc kubenswrapper[5119]: W0121 10:18:35.797133 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbb62a87_888f_486e_9087_557a47d4754c.slice/crio-1b7260681d5c20c1a5aaf76b1c724d05548bbd998e8dcf6933b27861c5a1f8cd WatchSource:0}: Error finding container 1b7260681d5c20c1a5aaf76b1c724d05548bbd998e8dcf6933b27861c5a1f8cd: Status 404 returned error can't find the container with id 1b7260681d5c20c1a5aaf76b1c724d05548bbd998e8dcf6933b27861c5a1f8cd
Jan 21 10:18:36 crc kubenswrapper[5119]: I0121 10:18:36.015372 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"bbb62a87-888f-486e-9087-557a47d4754c","Type":"ContainerStarted","Data":"1b7260681d5c20c1a5aaf76b1c724d05548bbd998e8dcf6933b27861c5a1f8cd"}
Jan 21 10:18:37 crc kubenswrapper[5119]: I0121 10:18:37.021354 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"bbb62a87-888f-486e-9087-557a47d4754c","Type":"ContainerStarted","Data":"b8484af80379198f70752a37ba0f140fe8ed0dc4679fb825063a066b0b19ca7b"}
Jan 21 10:18:38 crc kubenswrapper[5119]: I0121 10:18:38.028205 5119 generic.go:358] "Generic (PLEG): container finished" podID="bbb62a87-888f-486e-9087-557a47d4754c" containerID="b8484af80379198f70752a37ba0f140fe8ed0dc4679fb825063a066b0b19ca7b" exitCode=0
Jan 21 10:18:38 crc kubenswrapper[5119]: I0121 10:18:38.028300 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"bbb62a87-888f-486e-9087-557a47d4754c","Type":"ContainerDied","Data":"b8484af80379198f70752a37ba0f140fe8ed0dc4679fb825063a066b0b19ca7b"}
Jan 21 10:18:39 crc kubenswrapper[5119]: I0121 10:18:39.036722 5119 generic.go:358] "Generic (PLEG): container finished" podID="bbb62a87-888f-486e-9087-557a47d4754c" containerID="7be507abf0db916deff1abf19e4b657a23178c8206dec3ae40a99bc7466a27ea" exitCode=0
Jan 21 10:18:39 crc kubenswrapper[5119]: I0121 10:18:39.036788 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"bbb62a87-888f-486e-9087-557a47d4754c","Type":"ContainerDied","Data":"7be507abf0db916deff1abf19e4b657a23178c8206dec3ae40a99bc7466a27ea"}
Jan 21 10:18:39 crc kubenswrapper[5119]: I0121 10:18:39.069065 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-2-build_bbb62a87-888f-486e-9087-557a47d4754c/manage-dockerfile/0.log"
Jan 21 10:18:40 crc kubenswrapper[5119]: I0121 10:18:40.046925 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"bbb62a87-888f-486e-9087-557a47d4754c","Type":"ContainerStarted","Data":"2cf5ed38dd30234bb1dc96890ee3fcf8bfd40676b77d75bbead8640b09addc88"}
Jan 21 10:18:40 crc kubenswrapper[5119]: I0121 10:18:40.076058 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-bundle-2-build" podStartSLOduration=5.076040132 podStartE2EDuration="5.076040132s" podCreationTimestamp="2026-01-21 10:18:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:18:40.069646828 +0000 UTC m=+1435.737738516" watchObservedRunningTime="2026-01-21 10:18:40.076040132 +0000 UTC m=+1435.744131810"
Jan 21 10:18:45 crc kubenswrapper[5119]: I0121 10:18:45.084036 5119 generic.go:358] "Generic (PLEG): container finished" podID="bbb62a87-888f-486e-9087-557a47d4754c" containerID="2cf5ed38dd30234bb1dc96890ee3fcf8bfd40676b77d75bbead8640b09addc88" exitCode=0
Jan 21 10:18:45 crc kubenswrapper[5119]: I0121 10:18:45.084109 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"bbb62a87-888f-486e-9087-557a47d4754c","Type":"ContainerDied","Data":"2cf5ed38dd30234bb1dc96890ee3fcf8bfd40676b77d75bbead8640b09addc88"}
Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.349390 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.467244 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bbb62a87-888f-486e-9087-557a47d4754c-node-pullsecrets\") pod \"bbb62a87-888f-486e-9087-557a47d4754c\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") "
Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.467524 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bbb62a87-888f-486e-9087-557a47d4754c-buildworkdir\") pod \"bbb62a87-888f-486e-9087-557a47d4754c\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") "
Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.467660 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bbb62a87-888f-486e-9087-557a47d4754c-build-system-configs\") pod \"bbb62a87-888f-486e-9087-557a47d4754c\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") "
Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.467780 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bbb62a87-888f-486e-9087-557a47d4754c-build-blob-cache\") pod \"bbb62a87-888f-486e-9087-557a47d4754c\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") "
Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.468863 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6dzx\" (UniqueName: \"kubernetes.io/projected/bbb62a87-888f-486e-9087-557a47d4754c-kube-api-access-n6dzx\") pod \"bbb62a87-888f-486e-9087-557a47d4754c\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") "
Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.469759 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bbb62a87-888f-486e-9087-557a47d4754c-buildcachedir\") pod \"bbb62a87-888f-486e-9087-557a47d4754c\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") "
Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.469872 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/bbb62a87-888f-486e-9087-557a47d4754c-builder-dockercfg-llwsp-pull\") pod \"bbb62a87-888f-486e-9087-557a47d4754c\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") "
Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.471036 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbb62a87-888f-486e-9087-557a47d4754c-build-ca-bundles\") pod \"bbb62a87-888f-486e-9087-557a47d4754c\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") "
Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.467388 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbb62a87-888f-486e-9087-557a47d4754c-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "bbb62a87-888f-486e-9087-557a47d4754c" (UID: "bbb62a87-888f-486e-9087-557a47d4754c"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.468177 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbb62a87-888f-486e-9087-557a47d4754c-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "bbb62a87-888f-486e-9087-557a47d4754c" (UID: "bbb62a87-888f-486e-9087-557a47d4754c"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.468191 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbb62a87-888f-486e-9087-557a47d4754c-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "bbb62a87-888f-486e-9087-557a47d4754c" (UID: "bbb62a87-888f-486e-9087-557a47d4754c"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.468811 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbb62a87-888f-486e-9087-557a47d4754c-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "bbb62a87-888f-486e-9087-557a47d4754c" (UID: "bbb62a87-888f-486e-9087-557a47d4754c"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.470863 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbb62a87-888f-486e-9087-557a47d4754c-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "bbb62a87-888f-486e-9087-557a47d4754c" (UID: "bbb62a87-888f-486e-9087-557a47d4754c"). InnerVolumeSpecName "buildcachedir".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.471169 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbb62a87-888f-486e-9087-557a47d4754c-build-proxy-ca-bundles\") pod \"bbb62a87-888f-486e-9087-557a47d4754c\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.471754 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bbb62a87-888f-486e-9087-557a47d4754c-container-storage-root\") pod \"bbb62a87-888f-486e-9087-557a47d4754c\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.471808 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bbb62a87-888f-486e-9087-557a47d4754c-container-storage-run\") pod \"bbb62a87-888f-486e-9087-557a47d4754c\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.471833 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/bbb62a87-888f-486e-9087-557a47d4754c-builder-dockercfg-llwsp-push\") pod \"bbb62a87-888f-486e-9087-557a47d4754c\" (UID: \"bbb62a87-888f-486e-9087-557a47d4754c\") " Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.471901 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbb62a87-888f-486e-9087-557a47d4754c-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "bbb62a87-888f-486e-9087-557a47d4754c" (UID: "bbb62a87-888f-486e-9087-557a47d4754c"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.472317 5119 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bbb62a87-888f-486e-9087-557a47d4754c-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.472341 5119 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bbb62a87-888f-486e-9087-557a47d4754c-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.472353 5119 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bbb62a87-888f-486e-9087-557a47d4754c-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.472369 5119 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bbb62a87-888f-486e-9087-557a47d4754c-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.472379 5119 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bbb62a87-888f-486e-9087-557a47d4754c-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.472390 5119 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbb62a87-888f-486e-9087-557a47d4754c-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.472728 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbb62a87-888f-486e-9087-557a47d4754c-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "bbb62a87-888f-486e-9087-557a47d4754c" 
(UID: "bbb62a87-888f-486e-9087-557a47d4754c"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.473487 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbb62a87-888f-486e-9087-557a47d4754c-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "bbb62a87-888f-486e-9087-557a47d4754c" (UID: "bbb62a87-888f-486e-9087-557a47d4754c"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.475719 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbb62a87-888f-486e-9087-557a47d4754c-builder-dockercfg-llwsp-pull" (OuterVolumeSpecName: "builder-dockercfg-llwsp-pull") pod "bbb62a87-888f-486e-9087-557a47d4754c" (UID: "bbb62a87-888f-486e-9087-557a47d4754c"). InnerVolumeSpecName "builder-dockercfg-llwsp-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.477800 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbb62a87-888f-486e-9087-557a47d4754c-kube-api-access-n6dzx" (OuterVolumeSpecName: "kube-api-access-n6dzx") pod "bbb62a87-888f-486e-9087-557a47d4754c" (UID: "bbb62a87-888f-486e-9087-557a47d4754c"). InnerVolumeSpecName "kube-api-access-n6dzx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.479211 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbb62a87-888f-486e-9087-557a47d4754c-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "bbb62a87-888f-486e-9087-557a47d4754c" (UID: "bbb62a87-888f-486e-9087-557a47d4754c"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.482825 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbb62a87-888f-486e-9087-557a47d4754c-builder-dockercfg-llwsp-push" (OuterVolumeSpecName: "builder-dockercfg-llwsp-push") pod "bbb62a87-888f-486e-9087-557a47d4754c" (UID: "bbb62a87-888f-486e-9087-557a47d4754c"). InnerVolumeSpecName "builder-dockercfg-llwsp-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.574080 5119 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbb62a87-888f-486e-9087-557a47d4754c-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.574107 5119 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bbb62a87-888f-486e-9087-557a47d4754c-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.574117 5119 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bbb62a87-888f-486e-9087-557a47d4754c-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.574125 5119 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/bbb62a87-888f-486e-9087-557a47d4754c-builder-dockercfg-llwsp-push\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 10:18:46.574136 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n6dzx\" (UniqueName: \"kubernetes.io/projected/bbb62a87-888f-486e-9087-557a47d4754c-kube-api-access-n6dzx\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:46 crc kubenswrapper[5119]: I0121 
10:18:46.574146 5119 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/bbb62a87-888f-486e-9087-557a47d4754c-builder-dockercfg-llwsp-pull\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:47 crc kubenswrapper[5119]: I0121 10:18:47.105723 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 21 10:18:47 crc kubenswrapper[5119]: I0121 10:18:47.105757 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"bbb62a87-888f-486e-9087-557a47d4754c","Type":"ContainerDied","Data":"1b7260681d5c20c1a5aaf76b1c724d05548bbd998e8dcf6933b27861c5a1f8cd"} Jan 21 10:18:47 crc kubenswrapper[5119]: I0121 10:18:47.106242 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b7260681d5c20c1a5aaf76b1c724d05548bbd998e8dcf6933b27861c5a1f8cd" Jan 21 10:18:47 crc kubenswrapper[5119]: I0121 10:18:47.702139 5119 scope.go:117] "RemoveContainer" containerID="6b2667245906ac565f5c1bb307f1b14021920f44ce31f173d52c9469311cbca8" Jan 21 10:18:50 crc kubenswrapper[5119]: I0121 10:18:50.204181 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-llnbn"] Jan 21 10:18:50 crc kubenswrapper[5119]: I0121 10:18:50.205487 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bbb62a87-888f-486e-9087-557a47d4754c" containerName="git-clone" Jan 21 10:18:50 crc kubenswrapper[5119]: I0121 10:18:50.205519 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbb62a87-888f-486e-9087-557a47d4754c" containerName="git-clone" Jan 21 10:18:50 crc kubenswrapper[5119]: I0121 10:18:50.205563 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bbb62a87-888f-486e-9087-557a47d4754c" containerName="docker-build" Jan 21 10:18:50 crc 
kubenswrapper[5119]: I0121 10:18:50.205575 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbb62a87-888f-486e-9087-557a47d4754c" containerName="docker-build" Jan 21 10:18:50 crc kubenswrapper[5119]: I0121 10:18:50.205653 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bbb62a87-888f-486e-9087-557a47d4754c" containerName="manage-dockerfile" Jan 21 10:18:50 crc kubenswrapper[5119]: I0121 10:18:50.205667 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbb62a87-888f-486e-9087-557a47d4754c" containerName="manage-dockerfile" Jan 21 10:18:50 crc kubenswrapper[5119]: I0121 10:18:50.205853 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="bbb62a87-888f-486e-9087-557a47d4754c" containerName="docker-build" Jan 21 10:18:50 crc kubenswrapper[5119]: I0121 10:18:50.231792 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-llnbn"] Jan 21 10:18:50 crc kubenswrapper[5119]: I0121 10:18:50.231965 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-llnbn" Jan 21 10:18:50 crc kubenswrapper[5119]: I0121 10:18:50.320721 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fdcbd33-5079-4bd4-aa03-96370c99945f-catalog-content\") pod \"redhat-operators-llnbn\" (UID: \"1fdcbd33-5079-4bd4-aa03-96370c99945f\") " pod="openshift-marketplace/redhat-operators-llnbn" Jan 21 10:18:50 crc kubenswrapper[5119]: I0121 10:18:50.320784 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br66z\" (UniqueName: \"kubernetes.io/projected/1fdcbd33-5079-4bd4-aa03-96370c99945f-kube-api-access-br66z\") pod \"redhat-operators-llnbn\" (UID: \"1fdcbd33-5079-4bd4-aa03-96370c99945f\") " pod="openshift-marketplace/redhat-operators-llnbn" Jan 21 10:18:50 crc kubenswrapper[5119]: I0121 10:18:50.320839 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fdcbd33-5079-4bd4-aa03-96370c99945f-utilities\") pod \"redhat-operators-llnbn\" (UID: \"1fdcbd33-5079-4bd4-aa03-96370c99945f\") " pod="openshift-marketplace/redhat-operators-llnbn" Jan 21 10:18:50 crc kubenswrapper[5119]: I0121 10:18:50.422107 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fdcbd33-5079-4bd4-aa03-96370c99945f-utilities\") pod \"redhat-operators-llnbn\" (UID: \"1fdcbd33-5079-4bd4-aa03-96370c99945f\") " pod="openshift-marketplace/redhat-operators-llnbn" Jan 21 10:18:50 crc kubenswrapper[5119]: I0121 10:18:50.422467 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fdcbd33-5079-4bd4-aa03-96370c99945f-catalog-content\") pod \"redhat-operators-llnbn\" (UID: 
\"1fdcbd33-5079-4bd4-aa03-96370c99945f\") " pod="openshift-marketplace/redhat-operators-llnbn" Jan 21 10:18:50 crc kubenswrapper[5119]: I0121 10:18:50.422504 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-br66z\" (UniqueName: \"kubernetes.io/projected/1fdcbd33-5079-4bd4-aa03-96370c99945f-kube-api-access-br66z\") pod \"redhat-operators-llnbn\" (UID: \"1fdcbd33-5079-4bd4-aa03-96370c99945f\") " pod="openshift-marketplace/redhat-operators-llnbn" Jan 21 10:18:50 crc kubenswrapper[5119]: I0121 10:18:50.422709 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fdcbd33-5079-4bd4-aa03-96370c99945f-utilities\") pod \"redhat-operators-llnbn\" (UID: \"1fdcbd33-5079-4bd4-aa03-96370c99945f\") " pod="openshift-marketplace/redhat-operators-llnbn" Jan 21 10:18:50 crc kubenswrapper[5119]: I0121 10:18:50.422986 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fdcbd33-5079-4bd4-aa03-96370c99945f-catalog-content\") pod \"redhat-operators-llnbn\" (UID: \"1fdcbd33-5079-4bd4-aa03-96370c99945f\") " pod="openshift-marketplace/redhat-operators-llnbn" Jan 21 10:18:50 crc kubenswrapper[5119]: I0121 10:18:50.440861 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-br66z\" (UniqueName: \"kubernetes.io/projected/1fdcbd33-5079-4bd4-aa03-96370c99945f-kube-api-access-br66z\") pod \"redhat-operators-llnbn\" (UID: \"1fdcbd33-5079-4bd4-aa03-96370c99945f\") " pod="openshift-marketplace/redhat-operators-llnbn" Jan 21 10:18:50 crc kubenswrapper[5119]: I0121 10:18:50.560703 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-llnbn" Jan 21 10:18:50 crc kubenswrapper[5119]: I0121 10:18:50.804946 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-llnbn"] Jan 21 10:18:51 crc kubenswrapper[5119]: I0121 10:18:51.130232 5119 generic.go:358] "Generic (PLEG): container finished" podID="1fdcbd33-5079-4bd4-aa03-96370c99945f" containerID="31218e2859902f65d9d6ea50ee85e6cc32f60c4fa11f293ca710067f67c5cd51" exitCode=0 Jan 21 10:18:51 crc kubenswrapper[5119]: I0121 10:18:51.130327 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-llnbn" event={"ID":"1fdcbd33-5079-4bd4-aa03-96370c99945f","Type":"ContainerDied","Data":"31218e2859902f65d9d6ea50ee85e6cc32f60c4fa11f293ca710067f67c5cd51"} Jan 21 10:18:51 crc kubenswrapper[5119]: I0121 10:18:51.130654 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-llnbn" event={"ID":"1fdcbd33-5079-4bd4-aa03-96370c99945f","Type":"ContainerStarted","Data":"eed28a91a3bde432eeae4f17d383455291796d0cc2231ae019adce78e168d7cd"} Jan 21 10:18:54 crc kubenswrapper[5119]: I0121 10:18:54.158014 5119 generic.go:358] "Generic (PLEG): container finished" podID="1fdcbd33-5079-4bd4-aa03-96370c99945f" containerID="b66d9c2d9404640783ffdfae5c930ef0538aeed9391b5dc97a1da1d8e55060f0" exitCode=0 Jan 21 10:18:54 crc kubenswrapper[5119]: I0121 10:18:54.158195 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-llnbn" event={"ID":"1fdcbd33-5079-4bd4-aa03-96370c99945f","Type":"ContainerDied","Data":"b66d9c2d9404640783ffdfae5c930ef0538aeed9391b5dc97a1da1d8e55060f0"} Jan 21 10:18:55 crc kubenswrapper[5119]: I0121 10:18:55.167399 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-llnbn" 
event={"ID":"1fdcbd33-5079-4bd4-aa03-96370c99945f","Type":"ContainerStarted","Data":"4abb8f65e0b1ed22ba65c30ee144d87f51a62a195d3ed589ad1731b122e25184"} Jan 21 10:19:00 crc kubenswrapper[5119]: I0121 10:19:00.561399 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-llnbn" Jan 21 10:19:00 crc kubenswrapper[5119]: I0121 10:19:00.562816 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-llnbn" Jan 21 10:19:00 crc kubenswrapper[5119]: I0121 10:19:00.606783 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-llnbn" Jan 21 10:19:00 crc kubenswrapper[5119]: I0121 10:19:00.628940 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-llnbn" podStartSLOduration=8.375419664 podStartE2EDuration="10.628923052s" podCreationTimestamp="2026-01-21 10:18:50 +0000 UTC" firstStartedPulling="2026-01-21 10:18:51.131212106 +0000 UTC m=+1446.799303784" lastFinishedPulling="2026-01-21 10:18:53.384715494 +0000 UTC m=+1449.052807172" observedRunningTime="2026-01-21 10:18:55.197857723 +0000 UTC m=+1450.865949401" watchObservedRunningTime="2026-01-21 10:19:00.628923052 +0000 UTC m=+1456.297014730" Jan 21 10:19:01 crc kubenswrapper[5119]: I0121 10:19:01.244784 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-llnbn" Jan 21 10:19:01 crc kubenswrapper[5119]: I0121 10:19:01.287406 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-llnbn"] Jan 21 10:19:03 crc kubenswrapper[5119]: I0121 10:19:03.217728 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-llnbn" podUID="1fdcbd33-5079-4bd4-aa03-96370c99945f" containerName="registry-server" 
containerID="cri-o://4abb8f65e0b1ed22ba65c30ee144d87f51a62a195d3ed589ad1731b122e25184" gracePeriod=2 Jan 21 10:19:04 crc kubenswrapper[5119]: I0121 10:19:04.640007 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Jan 21 10:19:06 crc kubenswrapper[5119]: I0121 10:19:06.241584 5119 generic.go:358] "Generic (PLEG): container finished" podID="1fdcbd33-5079-4bd4-aa03-96370c99945f" containerID="4abb8f65e0b1ed22ba65c30ee144d87f51a62a195d3ed589ad1731b122e25184" exitCode=0 Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.689967 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.690136 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.694548 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-llwsp\"" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.694839 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\"" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.694980 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-ca\"" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.697917 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-sys-config\"" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.697924 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-global-ca\"" Jan 21 10:19:07 crc 
kubenswrapper[5119]: I0121 10:19:07.700289 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-llnbn" event={"ID":"1fdcbd33-5079-4bd4-aa03-96370c99945f","Type":"ContainerDied","Data":"4abb8f65e0b1ed22ba65c30ee144d87f51a62a195d3ed589ad1731b122e25184"} Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.788937 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9001b215-32b9-49f3-bb75-fd770950053e-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.788988 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9001b215-32b9-49f3-bb75-fd770950053e-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.789023 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9001b215-32b9-49f3-bb75-fd770950053e-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.789041 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9001b215-32b9-49f3-bb75-fd770950053e-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" 
(UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.789065 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9001b215-32b9-49f3-bb75-fd770950053e-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.789083 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tsfb\" (UniqueName: \"kubernetes.io/projected/9001b215-32b9-49f3-bb75-fd770950053e-kube-api-access-5tsfb\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.789100 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/9001b215-32b9-49f3-bb75-fd770950053e-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.789120 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9001b215-32b9-49f3-bb75-fd770950053e-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 
10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.789145 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9001b215-32b9-49f3-bb75-fd770950053e-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.789196 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9001b215-32b9-49f3-bb75-fd770950053e-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.789240 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/9001b215-32b9-49f3-bb75-fd770950053e-builder-dockercfg-llwsp-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.789269 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/9001b215-32b9-49f3-bb75-fd770950053e-builder-dockercfg-llwsp-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.789284 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9001b215-32b9-49f3-bb75-fd770950053e-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.890900 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/9001b215-32b9-49f3-bb75-fd770950053e-builder-dockercfg-llwsp-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.890954 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9001b215-32b9-49f3-bb75-fd770950053e-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.891018 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9001b215-32b9-49f3-bb75-fd770950053e-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.891043 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9001b215-32b9-49f3-bb75-fd770950053e-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " 
pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.891066 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9001b215-32b9-49f3-bb75-fd770950053e-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.891087 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9001b215-32b9-49f3-bb75-fd770950053e-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.891117 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9001b215-32b9-49f3-bb75-fd770950053e-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.891137 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5tsfb\" (UniqueName: \"kubernetes.io/projected/9001b215-32b9-49f3-bb75-fd770950053e-kube-api-access-5tsfb\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.891162 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/9001b215-32b9-49f3-bb75-fd770950053e-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.891193 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9001b215-32b9-49f3-bb75-fd770950053e-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.891217 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9001b215-32b9-49f3-bb75-fd770950053e-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.891260 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9001b215-32b9-49f3-bb75-fd770950053e-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.891315 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/9001b215-32b9-49f3-bb75-fd770950053e-builder-dockercfg-llwsp-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: 
\"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.892800 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9001b215-32b9-49f3-bb75-fd770950053e-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.897287 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/9001b215-32b9-49f3-bb75-fd770950053e-builder-dockercfg-llwsp-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.897337 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9001b215-32b9-49f3-bb75-fd770950053e-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.898226 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/9001b215-32b9-49f3-bb75-fd770950053e-builder-dockercfg-llwsp-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.899028 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/9001b215-32b9-49f3-bb75-fd770950053e-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.899207 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9001b215-32b9-49f3-bb75-fd770950053e-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.899273 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9001b215-32b9-49f3-bb75-fd770950053e-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.900800 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/9001b215-32b9-49f3-bb75-fd770950053e-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.901082 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9001b215-32b9-49f3-bb75-fd770950053e-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " 
pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.901539 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9001b215-32b9-49f3-bb75-fd770950053e-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.901733 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9001b215-32b9-49f3-bb75-fd770950053e-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.902051 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9001b215-32b9-49f3-bb75-fd770950053e-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.921974 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tsfb\" (UniqueName: \"kubernetes.io/projected/9001b215-32b9-49f3-bb75-fd770950053e-kube-api-access-5tsfb\") pod \"service-telemetry-framework-index-1-build\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.971447 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-llnbn" Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.992782 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fdcbd33-5079-4bd4-aa03-96370c99945f-catalog-content\") pod \"1fdcbd33-5079-4bd4-aa03-96370c99945f\" (UID: \"1fdcbd33-5079-4bd4-aa03-96370c99945f\") " Jan 21 10:19:07 crc kubenswrapper[5119]: I0121 10:19:07.993441 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-br66z\" (UniqueName: \"kubernetes.io/projected/1fdcbd33-5079-4bd4-aa03-96370c99945f-kube-api-access-br66z\") pod \"1fdcbd33-5079-4bd4-aa03-96370c99945f\" (UID: \"1fdcbd33-5079-4bd4-aa03-96370c99945f\") " Jan 21 10:19:08 crc kubenswrapper[5119]: I0121 10:19:08.001340 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fdcbd33-5079-4bd4-aa03-96370c99945f-utilities\") pod \"1fdcbd33-5079-4bd4-aa03-96370c99945f\" (UID: \"1fdcbd33-5079-4bd4-aa03-96370c99945f\") " Jan 21 10:19:08 crc kubenswrapper[5119]: I0121 10:19:08.002295 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fdcbd33-5079-4bd4-aa03-96370c99945f-utilities" (OuterVolumeSpecName: "utilities") pod "1fdcbd33-5079-4bd4-aa03-96370c99945f" (UID: "1fdcbd33-5079-4bd4-aa03-96370c99945f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:19:08 crc kubenswrapper[5119]: I0121 10:19:08.007280 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fdcbd33-5079-4bd4-aa03-96370c99945f-kube-api-access-br66z" (OuterVolumeSpecName: "kube-api-access-br66z") pod "1fdcbd33-5079-4bd4-aa03-96370c99945f" (UID: "1fdcbd33-5079-4bd4-aa03-96370c99945f"). InnerVolumeSpecName "kube-api-access-br66z". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:19:08 crc kubenswrapper[5119]: I0121 10:19:08.010120 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:19:08 crc kubenswrapper[5119]: I0121 10:19:08.103366 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-br66z\" (UniqueName: \"kubernetes.io/projected/1fdcbd33-5079-4bd4-aa03-96370c99945f-kube-api-access-br66z\") on node \"crc\" DevicePath \"\"" Jan 21 10:19:08 crc kubenswrapper[5119]: I0121 10:19:08.103757 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fdcbd33-5079-4bd4-aa03-96370c99945f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:19:08 crc kubenswrapper[5119]: I0121 10:19:08.115958 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fdcbd33-5079-4bd4-aa03-96370c99945f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1fdcbd33-5079-4bd4-aa03-96370c99945f" (UID: "1fdcbd33-5079-4bd4-aa03-96370c99945f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:19:08 crc kubenswrapper[5119]: I0121 10:19:08.205198 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fdcbd33-5079-4bd4-aa03-96370c99945f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:19:08 crc kubenswrapper[5119]: I0121 10:19:08.208440 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Jan 21 10:19:08 crc kubenswrapper[5119]: I0121 10:19:08.259116 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-llnbn" event={"ID":"1fdcbd33-5079-4bd4-aa03-96370c99945f","Type":"ContainerDied","Data":"eed28a91a3bde432eeae4f17d383455291796d0cc2231ae019adce78e168d7cd"} Jan 21 10:19:08 crc kubenswrapper[5119]: I0121 10:19:08.259166 5119 scope.go:117] "RemoveContainer" containerID="4abb8f65e0b1ed22ba65c30ee144d87f51a62a195d3ed589ad1731b122e25184" Jan 21 10:19:08 crc kubenswrapper[5119]: I0121 10:19:08.259283 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-llnbn" Jan 21 10:19:08 crc kubenswrapper[5119]: I0121 10:19:08.261520 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"9001b215-32b9-49f3-bb75-fd770950053e","Type":"ContainerStarted","Data":"8c128f9f56beca484353e58a5233fc9d2f3eea7195bd4983188237bf907d95ed"} Jan 21 10:19:08 crc kubenswrapper[5119]: I0121 10:19:08.278127 5119 scope.go:117] "RemoveContainer" containerID="b66d9c2d9404640783ffdfae5c930ef0538aeed9391b5dc97a1da1d8e55060f0" Jan 21 10:19:08 crc kubenswrapper[5119]: I0121 10:19:08.290803 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-llnbn"] Jan 21 10:19:08 crc kubenswrapper[5119]: I0121 10:19:08.295674 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-llnbn"] Jan 21 10:19:08 crc kubenswrapper[5119]: I0121 10:19:08.309814 5119 scope.go:117] "RemoveContainer" containerID="31218e2859902f65d9d6ea50ee85e6cc32f60c4fa11f293ca710067f67c5cd51" Jan 21 10:19:08 crc kubenswrapper[5119]: I0121 10:19:08.601358 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fdcbd33-5079-4bd4-aa03-96370c99945f" path="/var/lib/kubelet/pods/1fdcbd33-5079-4bd4-aa03-96370c99945f/volumes" Jan 21 10:19:09 crc kubenswrapper[5119]: I0121 10:19:09.272016 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"9001b215-32b9-49f3-bb75-fd770950053e","Type":"ContainerStarted","Data":"bbc8f90978d2be79d230dd78b7253d32aa45b71d397c543670b00f1c3990f1eb"} Jan 21 10:19:10 crc kubenswrapper[5119]: I0121 10:19:10.280158 5119 generic.go:358] "Generic (PLEG): container finished" podID="9001b215-32b9-49f3-bb75-fd770950053e" containerID="bbc8f90978d2be79d230dd78b7253d32aa45b71d397c543670b00f1c3990f1eb" exitCode=0 Jan 21 10:19:10 crc kubenswrapper[5119]: I0121 
10:19:10.280244 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"9001b215-32b9-49f3-bb75-fd770950053e","Type":"ContainerDied","Data":"bbc8f90978d2be79d230dd78b7253d32aa45b71d397c543670b00f1c3990f1eb"} Jan 21 10:19:11 crc kubenswrapper[5119]: I0121 10:19:11.292365 5119 generic.go:358] "Generic (PLEG): container finished" podID="9001b215-32b9-49f3-bb75-fd770950053e" containerID="857285118a3d991fb9eaa07c6f9ecc341525b195dca23be93be3ced652838032" exitCode=0 Jan 21 10:19:11 crc kubenswrapper[5119]: I0121 10:19:11.292423 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"9001b215-32b9-49f3-bb75-fd770950053e","Type":"ContainerDied","Data":"857285118a3d991fb9eaa07c6f9ecc341525b195dca23be93be3ced652838032"} Jan 21 10:19:11 crc kubenswrapper[5119]: I0121 10:19:11.323118 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_9001b215-32b9-49f3-bb75-fd770950053e/manage-dockerfile/0.log" Jan 21 10:19:12 crc kubenswrapper[5119]: I0121 10:19:12.303032 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"9001b215-32b9-49f3-bb75-fd770950053e","Type":"ContainerStarted","Data":"0ea595eaa61644b0d683cd160e2313f91645f88dc3837b0b9e08f080bb43864c"} Jan 21 10:19:13 crc kubenswrapper[5119]: I0121 10:19:13.332621 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-framework-index-1-build" podStartSLOduration=9.332582992 podStartE2EDuration="9.332582992s" podCreationTimestamp="2026-01-21 10:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:19:13.330487124 +0000 UTC m=+1468.998578822" 
watchObservedRunningTime="2026-01-21 10:19:13.332582992 +0000 UTC m=+1469.000674670" Jan 21 10:19:46 crc kubenswrapper[5119]: I0121 10:19:46.040231 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log" Jan 21 10:19:46 crc kubenswrapper[5119]: I0121 10:19:46.040539 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log" Jan 21 10:19:46 crc kubenswrapper[5119]: I0121 10:19:46.044972 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 21 10:19:46 crc kubenswrapper[5119]: I0121 10:19:46.045035 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 21 10:19:49 crc kubenswrapper[5119]: I0121 10:19:49.919645 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:19:49 crc kubenswrapper[5119]: I0121 10:19:49.920031 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:20:00 crc kubenswrapper[5119]: I0121 10:20:00.134722 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483180-ctgp7"] Jan 21 10:20:00 crc kubenswrapper[5119]: I0121 10:20:00.135751 
5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1fdcbd33-5079-4bd4-aa03-96370c99945f" containerName="registry-server" Jan 21 10:20:00 crc kubenswrapper[5119]: I0121 10:20:00.135765 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fdcbd33-5079-4bd4-aa03-96370c99945f" containerName="registry-server" Jan 21 10:20:00 crc kubenswrapper[5119]: I0121 10:20:00.135776 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1fdcbd33-5079-4bd4-aa03-96370c99945f" containerName="extract-utilities" Jan 21 10:20:00 crc kubenswrapper[5119]: I0121 10:20:00.135783 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fdcbd33-5079-4bd4-aa03-96370c99945f" containerName="extract-utilities" Jan 21 10:20:00 crc kubenswrapper[5119]: I0121 10:20:00.135795 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1fdcbd33-5079-4bd4-aa03-96370c99945f" containerName="extract-content" Jan 21 10:20:00 crc kubenswrapper[5119]: I0121 10:20:00.135800 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fdcbd33-5079-4bd4-aa03-96370c99945f" containerName="extract-content" Jan 21 10:20:00 crc kubenswrapper[5119]: I0121 10:20:00.135900 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="1fdcbd33-5079-4bd4-aa03-96370c99945f" containerName="registry-server" Jan 21 10:20:00 crc kubenswrapper[5119]: I0121 10:20:00.770275 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483180-ctgp7" Jan 21 10:20:00 crc kubenswrapper[5119]: I0121 10:20:00.773217 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:20:00 crc kubenswrapper[5119]: I0121 10:20:00.773413 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:20:00 crc kubenswrapper[5119]: I0121 10:20:00.774364 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\"" Jan 21 10:20:00 crc kubenswrapper[5119]: I0121 10:20:00.782836 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483180-ctgp7"] Jan 21 10:20:00 crc kubenswrapper[5119]: I0121 10:20:00.824808 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmlrs\" (UniqueName: \"kubernetes.io/projected/47bd5ea6-6b18-4142-ba75-c66720a8059e-kube-api-access-pmlrs\") pod \"auto-csr-approver-29483180-ctgp7\" (UID: \"47bd5ea6-6b18-4142-ba75-c66720a8059e\") " pod="openshift-infra/auto-csr-approver-29483180-ctgp7" Jan 21 10:20:00 crc kubenswrapper[5119]: I0121 10:20:00.926495 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pmlrs\" (UniqueName: \"kubernetes.io/projected/47bd5ea6-6b18-4142-ba75-c66720a8059e-kube-api-access-pmlrs\") pod \"auto-csr-approver-29483180-ctgp7\" (UID: \"47bd5ea6-6b18-4142-ba75-c66720a8059e\") " pod="openshift-infra/auto-csr-approver-29483180-ctgp7" Jan 21 10:20:00 crc kubenswrapper[5119]: I0121 10:20:00.952074 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmlrs\" (UniqueName: \"kubernetes.io/projected/47bd5ea6-6b18-4142-ba75-c66720a8059e-kube-api-access-pmlrs\") pod \"auto-csr-approver-29483180-ctgp7\" (UID: 
\"47bd5ea6-6b18-4142-ba75-c66720a8059e\") " pod="openshift-infra/auto-csr-approver-29483180-ctgp7" Jan 21 10:20:01 crc kubenswrapper[5119]: I0121 10:20:01.087823 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483180-ctgp7" Jan 21 10:20:01 crc kubenswrapper[5119]: I0121 10:20:01.499383 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483180-ctgp7"] Jan 21 10:20:01 crc kubenswrapper[5119]: I0121 10:20:01.501492 5119 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 10:20:01 crc kubenswrapper[5119]: I0121 10:20:01.646297 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483180-ctgp7" event={"ID":"47bd5ea6-6b18-4142-ba75-c66720a8059e","Type":"ContainerStarted","Data":"1a590a92f2983ec935a30fb685864f8c6bd003683fde595d5804d6eec19ca9c7"} Jan 21 10:20:03 crc kubenswrapper[5119]: I0121 10:20:03.663332 5119 generic.go:358] "Generic (PLEG): container finished" podID="47bd5ea6-6b18-4142-ba75-c66720a8059e" containerID="796ccbbdf71ee62d13365dcd21740c2ad758ddca4f7e6aa802a0b075452278ab" exitCode=0 Jan 21 10:20:03 crc kubenswrapper[5119]: I0121 10:20:03.663390 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483180-ctgp7" event={"ID":"47bd5ea6-6b18-4142-ba75-c66720a8059e","Type":"ContainerDied","Data":"796ccbbdf71ee62d13365dcd21740c2ad758ddca4f7e6aa802a0b075452278ab"} Jan 21 10:20:04 crc kubenswrapper[5119]: I0121 10:20:04.673325 5119 generic.go:358] "Generic (PLEG): container finished" podID="9001b215-32b9-49f3-bb75-fd770950053e" containerID="0ea595eaa61644b0d683cd160e2313f91645f88dc3837b0b9e08f080bb43864c" exitCode=0 Jan 21 10:20:04 crc kubenswrapper[5119]: I0121 10:20:04.673428 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" 
event={"ID":"9001b215-32b9-49f3-bb75-fd770950053e","Type":"ContainerDied","Data":"0ea595eaa61644b0d683cd160e2313f91645f88dc3837b0b9e08f080bb43864c"} Jan 21 10:20:04 crc kubenswrapper[5119]: I0121 10:20:04.952728 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483180-ctgp7" Jan 21 10:20:05 crc kubenswrapper[5119]: I0121 10:20:05.083885 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmlrs\" (UniqueName: \"kubernetes.io/projected/47bd5ea6-6b18-4142-ba75-c66720a8059e-kube-api-access-pmlrs\") pod \"47bd5ea6-6b18-4142-ba75-c66720a8059e\" (UID: \"47bd5ea6-6b18-4142-ba75-c66720a8059e\") " Jan 21 10:20:05 crc kubenswrapper[5119]: I0121 10:20:05.090793 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47bd5ea6-6b18-4142-ba75-c66720a8059e-kube-api-access-pmlrs" (OuterVolumeSpecName: "kube-api-access-pmlrs") pod "47bd5ea6-6b18-4142-ba75-c66720a8059e" (UID: "47bd5ea6-6b18-4142-ba75-c66720a8059e"). InnerVolumeSpecName "kube-api-access-pmlrs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:20:05 crc kubenswrapper[5119]: I0121 10:20:05.185970 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pmlrs\" (UniqueName: \"kubernetes.io/projected/47bd5ea6-6b18-4142-ba75-c66720a8059e-kube-api-access-pmlrs\") on node \"crc\" DevicePath \"\"" Jan 21 10:20:05 crc kubenswrapper[5119]: I0121 10:20:05.680928 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483180-ctgp7" Jan 21 10:20:05 crc kubenswrapper[5119]: I0121 10:20:05.680938 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483180-ctgp7" event={"ID":"47bd5ea6-6b18-4142-ba75-c66720a8059e","Type":"ContainerDied","Data":"1a590a92f2983ec935a30fb685864f8c6bd003683fde595d5804d6eec19ca9c7"} Jan 21 10:20:05 crc kubenswrapper[5119]: I0121 10:20:05.680991 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a590a92f2983ec935a30fb685864f8c6bd003683fde595d5804d6eec19ca9c7" Jan 21 10:20:05 crc kubenswrapper[5119]: I0121 10:20:05.923263 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.004927 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/9001b215-32b9-49f3-bb75-fd770950053e-builder-dockercfg-llwsp-pull\") pod \"9001b215-32b9-49f3-bb75-fd770950053e\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.004977 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tsfb\" (UniqueName: \"kubernetes.io/projected/9001b215-32b9-49f3-bb75-fd770950053e-kube-api-access-5tsfb\") pod \"9001b215-32b9-49f3-bb75-fd770950053e\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.005051 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9001b215-32b9-49f3-bb75-fd770950053e-container-storage-root\") pod \"9001b215-32b9-49f3-bb75-fd770950053e\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 
10:20:06.005109 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9001b215-32b9-49f3-bb75-fd770950053e-build-system-configs\") pod \"9001b215-32b9-49f3-bb75-fd770950053e\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.005455 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9001b215-32b9-49f3-bb75-fd770950053e-buildcachedir\") pod \"9001b215-32b9-49f3-bb75-fd770950053e\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.005538 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9001b215-32b9-49f3-bb75-fd770950053e-build-proxy-ca-bundles\") pod \"9001b215-32b9-49f3-bb75-fd770950053e\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.005592 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/9001b215-32b9-49f3-bb75-fd770950053e-builder-dockercfg-llwsp-push\") pod \"9001b215-32b9-49f3-bb75-fd770950053e\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.005645 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9001b215-32b9-49f3-bb75-fd770950053e-buildworkdir\") pod \"9001b215-32b9-49f3-bb75-fd770950053e\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") " Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.005665 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/9001b215-32b9-49f3-bb75-fd770950053e-build-ca-bundles\") pod \"9001b215-32b9-49f3-bb75-fd770950053e\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") "
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.005705 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9001b215-32b9-49f3-bb75-fd770950053e-node-pullsecrets\") pod \"9001b215-32b9-49f3-bb75-fd770950053e\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") "
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.005767 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9001b215-32b9-49f3-bb75-fd770950053e-container-storage-run\") pod \"9001b215-32b9-49f3-bb75-fd770950053e\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") "
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.005784 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9001b215-32b9-49f3-bb75-fd770950053e-build-blob-cache\") pod \"9001b215-32b9-49f3-bb75-fd770950053e\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") "
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.005838 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/9001b215-32b9-49f3-bb75-fd770950053e-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"9001b215-32b9-49f3-bb75-fd770950053e\" (UID: \"9001b215-32b9-49f3-bb75-fd770950053e\") "
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.006858 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9001b215-32b9-49f3-bb75-fd770950053e-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "9001b215-32b9-49f3-bb75-fd770950053e" (UID: "9001b215-32b9-49f3-bb75-fd770950053e"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.007431 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9001b215-32b9-49f3-bb75-fd770950053e-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "9001b215-32b9-49f3-bb75-fd770950053e" (UID: "9001b215-32b9-49f3-bb75-fd770950053e"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.007473 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9001b215-32b9-49f3-bb75-fd770950053e-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "9001b215-32b9-49f3-bb75-fd770950053e" (UID: "9001b215-32b9-49f3-bb75-fd770950053e"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.007574 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9001b215-32b9-49f3-bb75-fd770950053e-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "9001b215-32b9-49f3-bb75-fd770950053e" (UID: "9001b215-32b9-49f3-bb75-fd770950053e"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.007535 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9001b215-32b9-49f3-bb75-fd770950053e-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "9001b215-32b9-49f3-bb75-fd770950053e" (UID: "9001b215-32b9-49f3-bb75-fd770950053e"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.009409 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9001b215-32b9-49f3-bb75-fd770950053e-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "9001b215-32b9-49f3-bb75-fd770950053e" (UID: "9001b215-32b9-49f3-bb75-fd770950053e"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.009430 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9001b215-32b9-49f3-bb75-fd770950053e-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "9001b215-32b9-49f3-bb75-fd770950053e" (UID: "9001b215-32b9-49f3-bb75-fd770950053e"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.010442 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9001b215-32b9-49f3-bb75-fd770950053e-builder-dockercfg-llwsp-push" (OuterVolumeSpecName: "builder-dockercfg-llwsp-push") pod "9001b215-32b9-49f3-bb75-fd770950053e" (UID: "9001b215-32b9-49f3-bb75-fd770950053e"). InnerVolumeSpecName "builder-dockercfg-llwsp-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.013439 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9001b215-32b9-49f3-bb75-fd770950053e-builder-dockercfg-llwsp-pull" (OuterVolumeSpecName: "builder-dockercfg-llwsp-pull") pod "9001b215-32b9-49f3-bb75-fd770950053e" (UID: "9001b215-32b9-49f3-bb75-fd770950053e"). InnerVolumeSpecName "builder-dockercfg-llwsp-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.013577 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9001b215-32b9-49f3-bb75-fd770950053e-kube-api-access-5tsfb" (OuterVolumeSpecName: "kube-api-access-5tsfb") pod "9001b215-32b9-49f3-bb75-fd770950053e" (UID: "9001b215-32b9-49f3-bb75-fd770950053e"). InnerVolumeSpecName "kube-api-access-5tsfb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.013890 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483174-p2cmj"]
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.018663 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483174-p2cmj"]
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.022742 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9001b215-32b9-49f3-bb75-fd770950053e-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "9001b215-32b9-49f3-bb75-fd770950053e" (UID: "9001b215-32b9-49f3-bb75-fd770950053e"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.107096 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5tsfb\" (UniqueName: \"kubernetes.io/projected/9001b215-32b9-49f3-bb75-fd770950053e-kube-api-access-5tsfb\") on node \"crc\" DevicePath \"\""
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.107130 5119 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9001b215-32b9-49f3-bb75-fd770950053e-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.107140 5119 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9001b215-32b9-49f3-bb75-fd770950053e-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.107148 5119 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9001b215-32b9-49f3-bb75-fd770950053e-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.107156 5119 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-llwsp-push\" (UniqueName: \"kubernetes.io/secret/9001b215-32b9-49f3-bb75-fd770950053e-builder-dockercfg-llwsp-push\") on node \"crc\" DevicePath \"\""
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.107166 5119 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9001b215-32b9-49f3-bb75-fd770950053e-buildworkdir\") on node \"crc\" DevicePath \"\""
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.107175 5119 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9001b215-32b9-49f3-bb75-fd770950053e-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.107183 5119 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9001b215-32b9-49f3-bb75-fd770950053e-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.107190 5119 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9001b215-32b9-49f3-bb75-fd770950053e-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.107198 5119 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/9001b215-32b9-49f3-bb75-fd770950053e-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\""
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.107208 5119 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-llwsp-pull\" (UniqueName: \"kubernetes.io/secret/9001b215-32b9-49f3-bb75-fd770950053e-builder-dockercfg-llwsp-pull\") on node \"crc\" DevicePath \"\""
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.215167 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9001b215-32b9-49f3-bb75-fd770950053e-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "9001b215-32b9-49f3-bb75-fd770950053e" (UID: "9001b215-32b9-49f3-bb75-fd770950053e"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.309750 5119 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9001b215-32b9-49f3-bb75-fd770950053e-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.600463 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21745b5e-6ff2-4a6e-a97f-406c11e58793" path="/var/lib/kubelet/pods/21745b5e-6ff2-4a6e-a97f-406c11e58793/volumes"
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.688262 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"9001b215-32b9-49f3-bb75-fd770950053e","Type":"ContainerDied","Data":"8c128f9f56beca484353e58a5233fc9d2f3eea7195bd4983188237bf907d95ed"}
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.688309 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c128f9f56beca484353e58a5233fc9d2f3eea7195bd4983188237bf907d95ed"
Jan 21 10:20:06 crc kubenswrapper[5119]: I0121 10:20:06.688441 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 21 10:20:07 crc kubenswrapper[5119]: I0121 10:20:07.030097 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9001b215-32b9-49f3-bb75-fd770950053e-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "9001b215-32b9-49f3-bb75-fd770950053e" (UID: "9001b215-32b9-49f3-bb75-fd770950053e"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:20:07 crc kubenswrapper[5119]: I0121 10:20:07.120379 5119 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9001b215-32b9-49f3-bb75-fd770950053e-container-storage-root\") on node \"crc\" DevicePath \"\""
Jan 21 10:20:09 crc kubenswrapper[5119]: I0121 10:20:09.178536 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-9skl8"]
Jan 21 10:20:09 crc kubenswrapper[5119]: I0121 10:20:09.179494 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9001b215-32b9-49f3-bb75-fd770950053e" containerName="docker-build"
Jan 21 10:20:09 crc kubenswrapper[5119]: I0121 10:20:09.179508 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="9001b215-32b9-49f3-bb75-fd770950053e" containerName="docker-build"
Jan 21 10:20:09 crc kubenswrapper[5119]: I0121 10:20:09.179526 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9001b215-32b9-49f3-bb75-fd770950053e" containerName="git-clone"
Jan 21 10:20:09 crc kubenswrapper[5119]: I0121 10:20:09.179533 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="9001b215-32b9-49f3-bb75-fd770950053e" containerName="git-clone"
Jan 21 10:20:09 crc kubenswrapper[5119]: I0121 10:20:09.179545 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="47bd5ea6-6b18-4142-ba75-c66720a8059e" containerName="oc"
Jan 21 10:20:09 crc kubenswrapper[5119]: I0121 10:20:09.179552 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="47bd5ea6-6b18-4142-ba75-c66720a8059e" containerName="oc"
Jan 21 10:20:09 crc kubenswrapper[5119]: I0121 10:20:09.179561 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9001b215-32b9-49f3-bb75-fd770950053e" containerName="manage-dockerfile"
Jan 21 10:20:09 crc kubenswrapper[5119]: I0121 10:20:09.179566 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="9001b215-32b9-49f3-bb75-fd770950053e" containerName="manage-dockerfile"
Jan 21 10:20:09 crc kubenswrapper[5119]: I0121 10:20:09.179686 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="9001b215-32b9-49f3-bb75-fd770950053e" containerName="docker-build"
Jan 21 10:20:09 crc kubenswrapper[5119]: I0121 10:20:09.179705 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="47bd5ea6-6b18-4142-ba75-c66720a8059e" containerName="oc"
Jan 21 10:20:09 crc kubenswrapper[5119]: I0121 10:20:09.228893 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-9skl8"]
Jan 21 10:20:09 crc kubenswrapper[5119]: I0121 10:20:09.229063 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-9skl8"
Jan 21 10:20:09 crc kubenswrapper[5119]: I0121 10:20:09.234488 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"infrawatch-operators-dockercfg-9kd7s\""
Jan 21 10:20:09 crc kubenswrapper[5119]: I0121 10:20:09.352947 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkfl6\" (UniqueName: \"kubernetes.io/projected/9a8f73da-3e22-403a-914c-cada7a1ef592-kube-api-access-mkfl6\") pod \"infrawatch-operators-9skl8\" (UID: \"9a8f73da-3e22-403a-914c-cada7a1ef592\") " pod="service-telemetry/infrawatch-operators-9skl8"
Jan 21 10:20:09 crc kubenswrapper[5119]: I0121 10:20:09.453889 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mkfl6\" (UniqueName: \"kubernetes.io/projected/9a8f73da-3e22-403a-914c-cada7a1ef592-kube-api-access-mkfl6\") pod \"infrawatch-operators-9skl8\" (UID: \"9a8f73da-3e22-403a-914c-cada7a1ef592\") " pod="service-telemetry/infrawatch-operators-9skl8"
Jan 21 10:20:09 crc kubenswrapper[5119]: I0121 10:20:09.474585 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkfl6\" (UniqueName: \"kubernetes.io/projected/9a8f73da-3e22-403a-914c-cada7a1ef592-kube-api-access-mkfl6\") pod \"infrawatch-operators-9skl8\" (UID: \"9a8f73da-3e22-403a-914c-cada7a1ef592\") " pod="service-telemetry/infrawatch-operators-9skl8"
Jan 21 10:20:09 crc kubenswrapper[5119]: I0121 10:20:09.570185 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-9skl8"
Jan 21 10:20:09 crc kubenswrapper[5119]: I0121 10:20:09.748199 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-9skl8"]
Jan 21 10:20:10 crc kubenswrapper[5119]: I0121 10:20:10.720575 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-9skl8" event={"ID":"9a8f73da-3e22-403a-914c-cada7a1ef592","Type":"ContainerStarted","Data":"062acb1eae09960d69153d79682e37470ad2698efc9fa0e6bd8e977e5e2be09c"}
Jan 21 10:20:19 crc kubenswrapper[5119]: I0121 10:20:19.781549 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-9skl8" event={"ID":"9a8f73da-3e22-403a-914c-cada7a1ef592","Type":"ContainerStarted","Data":"184c23e5a0482b196b18a73c979241f5c5f73f81dde0189214ba08d9055d3cf7"}
Jan 21 10:20:19 crc kubenswrapper[5119]: I0121 10:20:19.919042 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 10:20:19 crc kubenswrapper[5119]: I0121 10:20:19.919109 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 10:20:29 crc kubenswrapper[5119]: I0121 10:20:29.571049 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-9skl8"
Jan 21 10:20:29 crc kubenswrapper[5119]: I0121 10:20:29.572120 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-9skl8"
Jan 21 10:20:29 crc kubenswrapper[5119]: I0121 10:20:29.600840 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-9skl8"
Jan 21 10:20:29 crc kubenswrapper[5119]: I0121 10:20:29.617416 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-9skl8" podStartSLOduration=11.077275819 podStartE2EDuration="20.617400118s" podCreationTimestamp="2026-01-21 10:20:09 +0000 UTC" firstStartedPulling="2026-01-21 10:20:09.757022904 +0000 UTC m=+1525.425114582" lastFinishedPulling="2026-01-21 10:20:19.297147203 +0000 UTC m=+1534.965238881" observedRunningTime="2026-01-21 10:20:19.79574203 +0000 UTC m=+1535.463833708" watchObservedRunningTime="2026-01-21 10:20:29.617400118 +0000 UTC m=+1545.285491796"
Jan 21 10:20:29 crc kubenswrapper[5119]: I0121 10:20:29.650817 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-9skl8"
Jan 21 10:20:43 crc kubenswrapper[5119]: I0121 10:20:43.448225 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7"]
Jan 21 10:20:43 crc kubenswrapper[5119]: I0121 10:20:43.545362 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7"]
Jan 21 10:20:43 crc kubenswrapper[5119]: I0121 10:20:43.545525 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7"
Jan 21 10:20:43 crc kubenswrapper[5119]: I0121 10:20:43.611227 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfx59\" (UniqueName: \"kubernetes.io/projected/c3c48b5c-d02d-406d-8893-4b4e73df93b5-kube-api-access-qfx59\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7\" (UID: \"c3c48b5c-d02d-406d-8893-4b4e73df93b5\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7"
Jan 21 10:20:43 crc kubenswrapper[5119]: I0121 10:20:43.611352 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3c48b5c-d02d-406d-8893-4b4e73df93b5-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7\" (UID: \"c3c48b5c-d02d-406d-8893-4b4e73df93b5\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7"
Jan 21 10:20:43 crc kubenswrapper[5119]: I0121 10:20:43.611386 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3c48b5c-d02d-406d-8893-4b4e73df93b5-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7\" (UID: \"c3c48b5c-d02d-406d-8893-4b4e73df93b5\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7"
Jan 21 10:20:43 crc kubenswrapper[5119]: I0121 10:20:43.712802 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qfx59\" (UniqueName: \"kubernetes.io/projected/c3c48b5c-d02d-406d-8893-4b4e73df93b5-kube-api-access-qfx59\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7\" (UID: \"c3c48b5c-d02d-406d-8893-4b4e73df93b5\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7"
Jan 21 10:20:43 crc kubenswrapper[5119]: I0121 10:20:43.713258 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3c48b5c-d02d-406d-8893-4b4e73df93b5-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7\" (UID: \"c3c48b5c-d02d-406d-8893-4b4e73df93b5\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7"
Jan 21 10:20:43 crc kubenswrapper[5119]: I0121 10:20:43.713327 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3c48b5c-d02d-406d-8893-4b4e73df93b5-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7\" (UID: \"c3c48b5c-d02d-406d-8893-4b4e73df93b5\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7"
Jan 21 10:20:43 crc kubenswrapper[5119]: I0121 10:20:43.713943 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3c48b5c-d02d-406d-8893-4b4e73df93b5-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7\" (UID: \"c3c48b5c-d02d-406d-8893-4b4e73df93b5\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7"
Jan 21 10:20:43 crc kubenswrapper[5119]: I0121 10:20:43.713985 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3c48b5c-d02d-406d-8893-4b4e73df93b5-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7\" (UID: \"c3c48b5c-d02d-406d-8893-4b4e73df93b5\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7"
Jan 21 10:20:43 crc kubenswrapper[5119]: I0121 10:20:43.736563 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfx59\" (UniqueName: \"kubernetes.io/projected/c3c48b5c-d02d-406d-8893-4b4e73df93b5-kube-api-access-qfx59\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7\" (UID: \"c3c48b5c-d02d-406d-8893-4b4e73df93b5\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7"
Jan 21 10:20:43 crc kubenswrapper[5119]: I0121 10:20:43.863741 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7"
Jan 21 10:20:44 crc kubenswrapper[5119]: I0121 10:20:44.046384 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7"]
Jan 21 10:20:44 crc kubenswrapper[5119]: W0121 10:20:44.046388 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3c48b5c_d02d_406d_8893_4b4e73df93b5.slice/crio-5d4011277d6806427268bfdfccd4c638c5dbbeab8fc1bc566781882e14bd7007 WatchSource:0}: Error finding container 5d4011277d6806427268bfdfccd4c638c5dbbeab8fc1bc566781882e14bd7007: Status 404 returned error can't find the container with id 5d4011277d6806427268bfdfccd4c638c5dbbeab8fc1bc566781882e14bd7007
Jan 21 10:20:44 crc kubenswrapper[5119]: I0121 10:20:44.250521 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb"]
Jan 21 10:20:44 crc kubenswrapper[5119]: I0121 10:20:44.283918 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb"]
Jan 21 10:20:44 crc kubenswrapper[5119]: I0121 10:20:44.284169 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb"
Jan 21 10:20:44 crc kubenswrapper[5119]: I0121 10:20:44.429330 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nmc9\" (UniqueName: \"kubernetes.io/projected/8549968f-d5b0-4ce5-beec-50d16fc6cf3e-kube-api-access-4nmc9\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb\" (UID: \"8549968f-d5b0-4ce5-beec-50d16fc6cf3e\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb"
Jan 21 10:20:44 crc kubenswrapper[5119]: I0121 10:20:44.429390 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8549968f-d5b0-4ce5-beec-50d16fc6cf3e-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb\" (UID: \"8549968f-d5b0-4ce5-beec-50d16fc6cf3e\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb"
Jan 21 10:20:44 crc kubenswrapper[5119]: I0121 10:20:44.429438 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8549968f-d5b0-4ce5-beec-50d16fc6cf3e-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb\" (UID: \"8549968f-d5b0-4ce5-beec-50d16fc6cf3e\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb"
Jan 21 10:20:44 crc kubenswrapper[5119]: I0121 10:20:44.530557 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8549968f-d5b0-4ce5-beec-50d16fc6cf3e-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb\" (UID: \"8549968f-d5b0-4ce5-beec-50d16fc6cf3e\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb"
Jan 21 10:20:44 crc kubenswrapper[5119]: I0121 10:20:44.530740 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4nmc9\" (UniqueName: \"kubernetes.io/projected/8549968f-d5b0-4ce5-beec-50d16fc6cf3e-kube-api-access-4nmc9\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb\" (UID: \"8549968f-d5b0-4ce5-beec-50d16fc6cf3e\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb"
Jan 21 10:20:44 crc kubenswrapper[5119]: I0121 10:20:44.530799 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8549968f-d5b0-4ce5-beec-50d16fc6cf3e-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb\" (UID: \"8549968f-d5b0-4ce5-beec-50d16fc6cf3e\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb"
Jan 21 10:20:44 crc kubenswrapper[5119]: I0121 10:20:44.531047 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8549968f-d5b0-4ce5-beec-50d16fc6cf3e-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb\" (UID: \"8549968f-d5b0-4ce5-beec-50d16fc6cf3e\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb"
Jan 21 10:20:44 crc kubenswrapper[5119]: I0121 10:20:44.531312 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8549968f-d5b0-4ce5-beec-50d16fc6cf3e-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb\" (UID: \"8549968f-d5b0-4ce5-beec-50d16fc6cf3e\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb"
Jan 21 10:20:44 crc kubenswrapper[5119]: I0121 10:20:44.549634 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nmc9\" (UniqueName: \"kubernetes.io/projected/8549968f-d5b0-4ce5-beec-50d16fc6cf3e-kube-api-access-4nmc9\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb\" (UID: \"8549968f-d5b0-4ce5-beec-50d16fc6cf3e\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb"
Jan 21 10:20:44 crc kubenswrapper[5119]: I0121 10:20:44.626271 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb"
Jan 21 10:20:44 crc kubenswrapper[5119]: I0121 10:20:44.727094 5119 generic.go:358] "Generic (PLEG): container finished" podID="c3c48b5c-d02d-406d-8893-4b4e73df93b5" containerID="c968faba13abc15246502e8658c457112dc789efe2d550fe117478f9c47bd8ca" exitCode=0
Jan 21 10:20:44 crc kubenswrapper[5119]: I0121 10:20:44.727245 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7" event={"ID":"c3c48b5c-d02d-406d-8893-4b4e73df93b5","Type":"ContainerDied","Data":"c968faba13abc15246502e8658c457112dc789efe2d550fe117478f9c47bd8ca"}
Jan 21 10:20:44 crc kubenswrapper[5119]: I0121 10:20:44.727631 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7" event={"ID":"c3c48b5c-d02d-406d-8893-4b4e73df93b5","Type":"ContainerStarted","Data":"5d4011277d6806427268bfdfccd4c638c5dbbeab8fc1bc566781882e14bd7007"}
Jan 21 10:20:45 crc kubenswrapper[5119]: I0121 10:20:45.025503 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb"]
Jan 21 10:20:45 crc kubenswrapper[5119]: W0121 10:20:45.346654 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8549968f_d5b0_4ce5_beec_50d16fc6cf3e.slice/crio-9a5cd30c10f89dd495f6e0ba4b27ea18473182df02e7ae2e617fb6dcfb1612b0 WatchSource:0}: Error finding container 9a5cd30c10f89dd495f6e0ba4b27ea18473182df02e7ae2e617fb6dcfb1612b0: Status 404 returned error can't find the container with id 9a5cd30c10f89dd495f6e0ba4b27ea18473182df02e7ae2e617fb6dcfb1612b0
Jan 21 10:20:45 crc kubenswrapper[5119]: I0121 10:20:45.734990 5119 generic.go:358] "Generic (PLEG): container finished" podID="8549968f-d5b0-4ce5-beec-50d16fc6cf3e" containerID="df8c220fe928852bcf9e2021d7d08fa565a6af1c9725a12024ed848ac1a45d0d" exitCode=0
Jan 21 10:20:45 crc kubenswrapper[5119]: I0121 10:20:45.735040 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb" event={"ID":"8549968f-d5b0-4ce5-beec-50d16fc6cf3e","Type":"ContainerDied","Data":"df8c220fe928852bcf9e2021d7d08fa565a6af1c9725a12024ed848ac1a45d0d"}
Jan 21 10:20:45 crc kubenswrapper[5119]: I0121 10:20:45.735081 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb" event={"ID":"8549968f-d5b0-4ce5-beec-50d16fc6cf3e","Type":"ContainerStarted","Data":"9a5cd30c10f89dd495f6e0ba4b27ea18473182df02e7ae2e617fb6dcfb1612b0"}
Jan 21 10:20:45 crc kubenswrapper[5119]: I0121 10:20:45.738198 5119 generic.go:358] "Generic (PLEG): container finished" podID="c3c48b5c-d02d-406d-8893-4b4e73df93b5" containerID="a3ca007b7e52b3c023e9006959455ba79e1ccc0b8fde2092738173a6487d0ea7" exitCode=0
Jan 21 10:20:45 crc kubenswrapper[5119]: I0121 10:20:45.738274 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7" event={"ID":"c3c48b5c-d02d-406d-8893-4b4e73df93b5","Type":"ContainerDied","Data":"a3ca007b7e52b3c023e9006959455ba79e1ccc0b8fde2092738173a6487d0ea7"}
Jan 21 10:20:46 crc kubenswrapper[5119]: I0121 10:20:46.746854 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7" event={"ID":"c3c48b5c-d02d-406d-8893-4b4e73df93b5","Type":"ContainerStarted","Data":"489f7dc88922bb68cf13e3084ab54b73596dd5af589d71f5c1db8f42bc766e03"}
Jan 21 10:20:47 crc kubenswrapper[5119]: I0121 10:20:47.755765 5119 generic.go:358] "Generic (PLEG): container finished" podID="8549968f-d5b0-4ce5-beec-50d16fc6cf3e" containerID="b5de5060be68fca93f735af7441e51a88f9493223cc561cb2072243ee2ce64c4" exitCode=0
Jan 21 10:20:47 crc kubenswrapper[5119]: I0121 10:20:47.755847 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb" event={"ID":"8549968f-d5b0-4ce5-beec-50d16fc6cf3e","Type":"ContainerDied","Data":"b5de5060be68fca93f735af7441e51a88f9493223cc561cb2072243ee2ce64c4"}
Jan 21 10:20:47 crc kubenswrapper[5119]: I0121 10:20:47.759235 5119 generic.go:358] "Generic (PLEG): container finished" podID="c3c48b5c-d02d-406d-8893-4b4e73df93b5" containerID="489f7dc88922bb68cf13e3084ab54b73596dd5af589d71f5c1db8f42bc766e03" exitCode=0
Jan 21 10:20:47 crc kubenswrapper[5119]: I0121 10:20:47.759368 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7" event={"ID":"c3c48b5c-d02d-406d-8893-4b4e73df93b5","Type":"ContainerDied","Data":"489f7dc88922bb68cf13e3084ab54b73596dd5af589d71f5c1db8f42bc766e03"}
Jan 21 10:20:47 crc kubenswrapper[5119]: I0121 10:20:47.860445 5119 scope.go:117] "RemoveContainer" containerID="2e4862dfd5c5374082fa377cf2c8c7fd865efaa7c3b0474857cef9eaf047116b"
Jan 21 10:20:48 crc kubenswrapper[5119]: I0121 10:20:48.767500 5119 generic.go:358] "Generic (PLEG): container finished" podID="8549968f-d5b0-4ce5-beec-50d16fc6cf3e" containerID="ccc17ae08503703306862507fec0cd5f3871b95c9eef9458e5a5c0ed74523a48" exitCode=0
Jan 21 10:20:48 crc kubenswrapper[5119]: I0121 10:20:48.767590 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb" event={"ID":"8549968f-d5b0-4ce5-beec-50d16fc6cf3e","Type":"ContainerDied","Data":"ccc17ae08503703306862507fec0cd5f3871b95c9eef9458e5a5c0ed74523a48"}
Jan 21 10:20:49 crc kubenswrapper[5119]: I0121 10:20:49.098956 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7"
Jan 21 10:20:49 crc kubenswrapper[5119]: I0121 10:20:49.192266 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfx59\" (UniqueName: \"kubernetes.io/projected/c3c48b5c-d02d-406d-8893-4b4e73df93b5-kube-api-access-qfx59\") pod \"c3c48b5c-d02d-406d-8893-4b4e73df93b5\" (UID: \"c3c48b5c-d02d-406d-8893-4b4e73df93b5\") "
Jan 21 10:20:49 crc kubenswrapper[5119]: I0121 10:20:49.192781 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3c48b5c-d02d-406d-8893-4b4e73df93b5-util\") pod \"c3c48b5c-d02d-406d-8893-4b4e73df93b5\" (UID: \"c3c48b5c-d02d-406d-8893-4b4e73df93b5\") "
Jan 21 10:20:49 crc kubenswrapper[5119]: I0121 10:20:49.192838 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3c48b5c-d02d-406d-8893-4b4e73df93b5-bundle\") pod \"c3c48b5c-d02d-406d-8893-4b4e73df93b5\" (UID: \"c3c48b5c-d02d-406d-8893-4b4e73df93b5\") "
Jan 21 10:20:49 crc kubenswrapper[5119]: I0121 10:20:49.194162 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3c48b5c-d02d-406d-8893-4b4e73df93b5-bundle" (OuterVolumeSpecName: "bundle") pod "c3c48b5c-d02d-406d-8893-4b4e73df93b5" (UID: "c3c48b5c-d02d-406d-8893-4b4e73df93b5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:20:49 crc kubenswrapper[5119]: I0121 10:20:49.200328 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3c48b5c-d02d-406d-8893-4b4e73df93b5-kube-api-access-qfx59" (OuterVolumeSpecName: "kube-api-access-qfx59") pod "c3c48b5c-d02d-406d-8893-4b4e73df93b5" (UID: "c3c48b5c-d02d-406d-8893-4b4e73df93b5"). InnerVolumeSpecName "kube-api-access-qfx59". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:20:49 crc kubenswrapper[5119]: I0121 10:20:49.206445 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3c48b5c-d02d-406d-8893-4b4e73df93b5-util" (OuterVolumeSpecName: "util") pod "c3c48b5c-d02d-406d-8893-4b4e73df93b5" (UID: "c3c48b5c-d02d-406d-8893-4b4e73df93b5"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:20:49 crc kubenswrapper[5119]: I0121 10:20:49.294278 5119 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3c48b5c-d02d-406d-8893-4b4e73df93b5-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 10:20:49 crc kubenswrapper[5119]: I0121 10:20:49.294316 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qfx59\" (UniqueName: \"kubernetes.io/projected/c3c48b5c-d02d-406d-8893-4b4e73df93b5-kube-api-access-qfx59\") on node \"crc\" DevicePath \"\""
Jan 21 10:20:49 crc kubenswrapper[5119]: I0121 10:20:49.294326 5119 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3c48b5c-d02d-406d-8893-4b4e73df93b5-util\") on node \"crc\" DevicePath \"\""
Jan 21 10:20:49 crc kubenswrapper[5119]: I0121 10:20:49.781956 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7"
event={"ID":"c3c48b5c-d02d-406d-8893-4b4e73df93b5","Type":"ContainerDied","Data":"5d4011277d6806427268bfdfccd4c638c5dbbeab8fc1bc566781882e14bd7007"} Jan 21 10:20:49 crc kubenswrapper[5119]: I0121 10:20:49.782034 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d4011277d6806427268bfdfccd4c638c5dbbeab8fc1bc566781882e14bd7007" Jan 21 10:20:49 crc kubenswrapper[5119]: I0121 10:20:49.782163 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7" Jan 21 10:20:49 crc kubenswrapper[5119]: I0121 10:20:49.919444 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:20:49 crc kubenswrapper[5119]: I0121 10:20:49.919527 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:20:49 crc kubenswrapper[5119]: I0121 10:20:49.919587 5119 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" Jan 21 10:20:49 crc kubenswrapper[5119]: I0121 10:20:49.921063 5119 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b6ab9884520f29aaaa049d566e0f03918d804eb56ab65d3fab45a8a4f5ef9ba3"} pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 
10:20:49 crc kubenswrapper[5119]: I0121 10:20:49.921150 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" containerID="cri-o://b6ab9884520f29aaaa049d566e0f03918d804eb56ab65d3fab45a8a4f5ef9ba3" gracePeriod=600 Jan 21 10:20:50 crc kubenswrapper[5119]: I0121 10:20:50.074915 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb" Jan 21 10:20:50 crc kubenswrapper[5119]: I0121 10:20:50.207738 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nmc9\" (UniqueName: \"kubernetes.io/projected/8549968f-d5b0-4ce5-beec-50d16fc6cf3e-kube-api-access-4nmc9\") pod \"8549968f-d5b0-4ce5-beec-50d16fc6cf3e\" (UID: \"8549968f-d5b0-4ce5-beec-50d16fc6cf3e\") " Jan 21 10:20:50 crc kubenswrapper[5119]: I0121 10:20:50.207816 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8549968f-d5b0-4ce5-beec-50d16fc6cf3e-util\") pod \"8549968f-d5b0-4ce5-beec-50d16fc6cf3e\" (UID: \"8549968f-d5b0-4ce5-beec-50d16fc6cf3e\") " Jan 21 10:20:50 crc kubenswrapper[5119]: I0121 10:20:50.207910 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8549968f-d5b0-4ce5-beec-50d16fc6cf3e-bundle\") pod \"8549968f-d5b0-4ce5-beec-50d16fc6cf3e\" (UID: \"8549968f-d5b0-4ce5-beec-50d16fc6cf3e\") " Jan 21 10:20:50 crc kubenswrapper[5119]: I0121 10:20:50.209214 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8549968f-d5b0-4ce5-beec-50d16fc6cf3e-bundle" (OuterVolumeSpecName: "bundle") pod "8549968f-d5b0-4ce5-beec-50d16fc6cf3e" (UID: "8549968f-d5b0-4ce5-beec-50d16fc6cf3e"). 
InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:20:50 crc kubenswrapper[5119]: I0121 10:20:50.214412 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8549968f-d5b0-4ce5-beec-50d16fc6cf3e-kube-api-access-4nmc9" (OuterVolumeSpecName: "kube-api-access-4nmc9") pod "8549968f-d5b0-4ce5-beec-50d16fc6cf3e" (UID: "8549968f-d5b0-4ce5-beec-50d16fc6cf3e"). InnerVolumeSpecName "kube-api-access-4nmc9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:20:50 crc kubenswrapper[5119]: I0121 10:20:50.230235 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8549968f-d5b0-4ce5-beec-50d16fc6cf3e-util" (OuterVolumeSpecName: "util") pod "8549968f-d5b0-4ce5-beec-50d16fc6cf3e" (UID: "8549968f-d5b0-4ce5-beec-50d16fc6cf3e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:20:50 crc kubenswrapper[5119]: I0121 10:20:50.310175 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4nmc9\" (UniqueName: \"kubernetes.io/projected/8549968f-d5b0-4ce5-beec-50d16fc6cf3e-kube-api-access-4nmc9\") on node \"crc\" DevicePath \"\"" Jan 21 10:20:50 crc kubenswrapper[5119]: I0121 10:20:50.310212 5119 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8549968f-d5b0-4ce5-beec-50d16fc6cf3e-util\") on node \"crc\" DevicePath \"\"" Jan 21 10:20:50 crc kubenswrapper[5119]: I0121 10:20:50.310224 5119 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8549968f-d5b0-4ce5-beec-50d16fc6cf3e-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:20:50 crc kubenswrapper[5119]: I0121 10:20:50.791772 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb" 
event={"ID":"8549968f-d5b0-4ce5-beec-50d16fc6cf3e","Type":"ContainerDied","Data":"9a5cd30c10f89dd495f6e0ba4b27ea18473182df02e7ae2e617fb6dcfb1612b0"} Jan 21 10:20:50 crc kubenswrapper[5119]: I0121 10:20:50.792431 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a5cd30c10f89dd495f6e0ba4b27ea18473182df02e7ae2e617fb6dcfb1612b0" Jan 21 10:20:50 crc kubenswrapper[5119]: I0121 10:20:50.791820 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb" Jan 21 10:20:50 crc kubenswrapper[5119]: I0121 10:20:50.794646 5119 generic.go:358] "Generic (PLEG): container finished" podID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerID="b6ab9884520f29aaaa049d566e0f03918d804eb56ab65d3fab45a8a4f5ef9ba3" exitCode=0 Jan 21 10:20:50 crc kubenswrapper[5119]: I0121 10:20:50.794681 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerDied","Data":"b6ab9884520f29aaaa049d566e0f03918d804eb56ab65d3fab45a8a4f5ef9ba3"} Jan 21 10:20:50 crc kubenswrapper[5119]: I0121 10:20:50.794738 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerStarted","Data":"aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75"} Jan 21 10:20:50 crc kubenswrapper[5119]: I0121 10:20:50.794765 5119 scope.go:117] "RemoveContainer" containerID="e1293b9d5697f64c75dfcb0e9afb6682f3461979e9927eeb7658215e6f071a1d" Jan 21 10:20:55 crc kubenswrapper[5119]: I0121 10:20:55.121528 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-84d7cb46fc-c4c9r"] Jan 21 10:20:55 crc kubenswrapper[5119]: I0121 10:20:55.122991 5119 cpu_manager.go:401] "RemoveStaleState: 
containerMap: removing container" podUID="c3c48b5c-d02d-406d-8893-4b4e73df93b5" containerName="extract" Jan 21 10:20:55 crc kubenswrapper[5119]: I0121 10:20:55.123010 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3c48b5c-d02d-406d-8893-4b4e73df93b5" containerName="extract" Jan 21 10:20:55 crc kubenswrapper[5119]: I0121 10:20:55.123043 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8549968f-d5b0-4ce5-beec-50d16fc6cf3e" containerName="pull" Jan 21 10:20:55 crc kubenswrapper[5119]: I0121 10:20:55.123052 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="8549968f-d5b0-4ce5-beec-50d16fc6cf3e" containerName="pull" Jan 21 10:20:55 crc kubenswrapper[5119]: I0121 10:20:55.123073 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8549968f-d5b0-4ce5-beec-50d16fc6cf3e" containerName="extract" Jan 21 10:20:55 crc kubenswrapper[5119]: I0121 10:20:55.123083 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="8549968f-d5b0-4ce5-beec-50d16fc6cf3e" containerName="extract" Jan 21 10:20:55 crc kubenswrapper[5119]: I0121 10:20:55.123103 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c3c48b5c-d02d-406d-8893-4b4e73df93b5" containerName="util" Jan 21 10:20:55 crc kubenswrapper[5119]: I0121 10:20:55.123112 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3c48b5c-d02d-406d-8893-4b4e73df93b5" containerName="util" Jan 21 10:20:55 crc kubenswrapper[5119]: I0121 10:20:55.123122 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8549968f-d5b0-4ce5-beec-50d16fc6cf3e" containerName="util" Jan 21 10:20:55 crc kubenswrapper[5119]: I0121 10:20:55.123131 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="8549968f-d5b0-4ce5-beec-50d16fc6cf3e" containerName="util" Jan 21 10:20:55 crc kubenswrapper[5119]: I0121 10:20:55.123150 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="c3c48b5c-d02d-406d-8893-4b4e73df93b5" containerName="pull" Jan 21 10:20:55 crc kubenswrapper[5119]: I0121 10:20:55.123159 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3c48b5c-d02d-406d-8893-4b4e73df93b5" containerName="pull" Jan 21 10:20:55 crc kubenswrapper[5119]: I0121 10:20:55.123286 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="8549968f-d5b0-4ce5-beec-50d16fc6cf3e" containerName="extract" Jan 21 10:20:55 crc kubenswrapper[5119]: I0121 10:20:55.123301 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="c3c48b5c-d02d-406d-8893-4b4e73df93b5" containerName="extract" Jan 21 10:20:55 crc kubenswrapper[5119]: I0121 10:20:55.369363 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-84d7cb46fc-c4c9r"] Jan 21 10:20:55 crc kubenswrapper[5119]: I0121 10:20:55.369498 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-84d7cb46fc-c4c9r" Jan 21 10:20:55 crc kubenswrapper[5119]: I0121 10:20:55.371841 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-dockercfg-llw2x\"" Jan 21 10:20:55 crc kubenswrapper[5119]: I0121 10:20:55.475620 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/7ede4c81-63e3-44ba-8e96-9dcb8c34adce-runner\") pod \"service-telemetry-operator-84d7cb46fc-c4c9r\" (UID: \"7ede4c81-63e3-44ba-8e96-9dcb8c34adce\") " pod="service-telemetry/service-telemetry-operator-84d7cb46fc-c4c9r" Jan 21 10:20:55 crc kubenswrapper[5119]: I0121 10:20:55.476048 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dsd9\" (UniqueName: \"kubernetes.io/projected/7ede4c81-63e3-44ba-8e96-9dcb8c34adce-kube-api-access-6dsd9\") pod 
\"service-telemetry-operator-84d7cb46fc-c4c9r\" (UID: \"7ede4c81-63e3-44ba-8e96-9dcb8c34adce\") " pod="service-telemetry/service-telemetry-operator-84d7cb46fc-c4c9r" Jan 21 10:20:55 crc kubenswrapper[5119]: I0121 10:20:55.577705 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6dsd9\" (UniqueName: \"kubernetes.io/projected/7ede4c81-63e3-44ba-8e96-9dcb8c34adce-kube-api-access-6dsd9\") pod \"service-telemetry-operator-84d7cb46fc-c4c9r\" (UID: \"7ede4c81-63e3-44ba-8e96-9dcb8c34adce\") " pod="service-telemetry/service-telemetry-operator-84d7cb46fc-c4c9r" Jan 21 10:20:55 crc kubenswrapper[5119]: I0121 10:20:55.577835 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/7ede4c81-63e3-44ba-8e96-9dcb8c34adce-runner\") pod \"service-telemetry-operator-84d7cb46fc-c4c9r\" (UID: \"7ede4c81-63e3-44ba-8e96-9dcb8c34adce\") " pod="service-telemetry/service-telemetry-operator-84d7cb46fc-c4c9r" Jan 21 10:20:55 crc kubenswrapper[5119]: I0121 10:20:55.578481 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/7ede4c81-63e3-44ba-8e96-9dcb8c34adce-runner\") pod \"service-telemetry-operator-84d7cb46fc-c4c9r\" (UID: \"7ede4c81-63e3-44ba-8e96-9dcb8c34adce\") " pod="service-telemetry/service-telemetry-operator-84d7cb46fc-c4c9r" Jan 21 10:20:55 crc kubenswrapper[5119]: I0121 10:20:55.596547 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dsd9\" (UniqueName: \"kubernetes.io/projected/7ede4c81-63e3-44ba-8e96-9dcb8c34adce-kube-api-access-6dsd9\") pod \"service-telemetry-operator-84d7cb46fc-c4c9r\" (UID: \"7ede4c81-63e3-44ba-8e96-9dcb8c34adce\") " pod="service-telemetry/service-telemetry-operator-84d7cb46fc-c4c9r" Jan 21 10:20:55 crc kubenswrapper[5119]: I0121 10:20:55.688029 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-84d7cb46fc-c4c9r" Jan 21 10:20:55 crc kubenswrapper[5119]: I0121 10:20:55.915816 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-84d7cb46fc-c4c9r"] Jan 21 10:20:56 crc kubenswrapper[5119]: I0121 10:20:56.385571 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-57588ddc85-czgbb"] Jan 21 10:20:56 crc kubenswrapper[5119]: I0121 10:20:56.491802 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-57588ddc85-czgbb"] Jan 21 10:20:56 crc kubenswrapper[5119]: I0121 10:20:56.491964 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-57588ddc85-czgbb" Jan 21 10:20:56 crc kubenswrapper[5119]: I0121 10:20:56.494041 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-dockercfg-xm7nq\"" Jan 21 10:20:56 crc kubenswrapper[5119]: I0121 10:20:56.590077 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jzlp\" (UniqueName: \"kubernetes.io/projected/c7cc2173-29c2-4a8e-ab2b-7e373d79c484-kube-api-access-4jzlp\") pod \"smart-gateway-operator-57588ddc85-czgbb\" (UID: \"c7cc2173-29c2-4a8e-ab2b-7e373d79c484\") " pod="service-telemetry/smart-gateway-operator-57588ddc85-czgbb" Jan 21 10:20:56 crc kubenswrapper[5119]: I0121 10:20:56.590182 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/c7cc2173-29c2-4a8e-ab2b-7e373d79c484-runner\") pod \"smart-gateway-operator-57588ddc85-czgbb\" (UID: \"c7cc2173-29c2-4a8e-ab2b-7e373d79c484\") " pod="service-telemetry/smart-gateway-operator-57588ddc85-czgbb" Jan 21 10:20:56 crc kubenswrapper[5119]: I0121 10:20:56.691433 5119 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4jzlp\" (UniqueName: \"kubernetes.io/projected/c7cc2173-29c2-4a8e-ab2b-7e373d79c484-kube-api-access-4jzlp\") pod \"smart-gateway-operator-57588ddc85-czgbb\" (UID: \"c7cc2173-29c2-4a8e-ab2b-7e373d79c484\") " pod="service-telemetry/smart-gateway-operator-57588ddc85-czgbb" Jan 21 10:20:56 crc kubenswrapper[5119]: I0121 10:20:56.691526 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/c7cc2173-29c2-4a8e-ab2b-7e373d79c484-runner\") pod \"smart-gateway-operator-57588ddc85-czgbb\" (UID: \"c7cc2173-29c2-4a8e-ab2b-7e373d79c484\") " pod="service-telemetry/smart-gateway-operator-57588ddc85-czgbb" Jan 21 10:20:56 crc kubenswrapper[5119]: I0121 10:20:56.692099 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/c7cc2173-29c2-4a8e-ab2b-7e373d79c484-runner\") pod \"smart-gateway-operator-57588ddc85-czgbb\" (UID: \"c7cc2173-29c2-4a8e-ab2b-7e373d79c484\") " pod="service-telemetry/smart-gateway-operator-57588ddc85-czgbb" Jan 21 10:20:56 crc kubenswrapper[5119]: I0121 10:20:56.708774 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jzlp\" (UniqueName: \"kubernetes.io/projected/c7cc2173-29c2-4a8e-ab2b-7e373d79c484-kube-api-access-4jzlp\") pod \"smart-gateway-operator-57588ddc85-czgbb\" (UID: \"c7cc2173-29c2-4a8e-ab2b-7e373d79c484\") " pod="service-telemetry/smart-gateway-operator-57588ddc85-czgbb" Jan 21 10:20:56 crc kubenswrapper[5119]: I0121 10:20:56.809186 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-57588ddc85-czgbb" Jan 21 10:20:56 crc kubenswrapper[5119]: I0121 10:20:56.843725 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-84d7cb46fc-c4c9r" event={"ID":"7ede4c81-63e3-44ba-8e96-9dcb8c34adce","Type":"ContainerStarted","Data":"f9db10bb401f339030a9185ac8dd1c197edd1b7383ff1d9836861f35ce9eec3f"} Jan 21 10:20:57 crc kubenswrapper[5119]: I0121 10:20:57.282203 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-57588ddc85-czgbb"] Jan 21 10:20:57 crc kubenswrapper[5119]: I0121 10:20:57.856527 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-57588ddc85-czgbb" event={"ID":"c7cc2173-29c2-4a8e-ab2b-7e373d79c484","Type":"ContainerStarted","Data":"83c8b0a7ce80a55ab4ce9c52c2d75651ecf70908957406edbb1797315fcd12ca"} Jan 21 10:21:23 crc kubenswrapper[5119]: I0121 10:21:23.053815 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-57588ddc85-czgbb" event={"ID":"c7cc2173-29c2-4a8e-ab2b-7e373d79c484","Type":"ContainerStarted","Data":"8eb440612dccd4882841b637e66148851059ec6e79237f639e7f390947dc9e72"} Jan 21 10:21:23 crc kubenswrapper[5119]: I0121 10:21:23.066855 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-84d7cb46fc-c4c9r" event={"ID":"7ede4c81-63e3-44ba-8e96-9dcb8c34adce","Type":"ContainerStarted","Data":"20212026389a57381af733d195e013be258aa06f38d7923b81e08b9921a9846c"} Jan 21 10:21:23 crc kubenswrapper[5119]: I0121 10:21:23.082224 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-57588ddc85-czgbb" podStartSLOduration=2.470297193 podStartE2EDuration="27.082201399s" podCreationTimestamp="2026-01-21 10:20:56 +0000 UTC" firstStartedPulling="2026-01-21 10:20:57.283726303 +0000 UTC 
m=+1572.951817982" lastFinishedPulling="2026-01-21 10:21:21.89563051 +0000 UTC m=+1597.563722188" observedRunningTime="2026-01-21 10:21:23.072629298 +0000 UTC m=+1598.740720976" watchObservedRunningTime="2026-01-21 10:21:23.082201399 +0000 UTC m=+1598.750293077" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.239028 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-84d7cb46fc-c4c9r" podStartSLOduration=29.2392024 podStartE2EDuration="55.238989698s" podCreationTimestamp="2026-01-21 10:20:55 +0000 UTC" firstStartedPulling="2026-01-21 10:20:55.934006034 +0000 UTC m=+1571.602097712" lastFinishedPulling="2026-01-21 10:21:21.933793332 +0000 UTC m=+1597.601885010" observedRunningTime="2026-01-21 10:21:23.091633457 +0000 UTC m=+1598.759725146" watchObservedRunningTime="2026-01-21 10:21:50.238989698 +0000 UTC m=+1625.907081376" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.243461 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-r8tgq"] Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.252098 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.256283 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-interconnect-sasl-config\"" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.256316 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-users\"" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.256293 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-ca\"" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.256463 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-credentials\"" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.256572 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-credentials\"" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.256679 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-dockercfg-tk8q4\"" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.256885 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-ca\"" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.257470 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-r8tgq"] Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.402335 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: 
\"kubernetes.io/secret/532aca70-8c2f-4163-b5a7-781f17183d03-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-r8tgq\" (UID: \"532aca70-8c2f-4163-b5a7-781f17183d03\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.402807 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/532aca70-8c2f-4163-b5a7-781f17183d03-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-r8tgq\" (UID: \"532aca70-8c2f-4163-b5a7-781f17183d03\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.402861 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/532aca70-8c2f-4163-b5a7-781f17183d03-sasl-config\") pod \"default-interconnect-55bf8d5cb-r8tgq\" (UID: \"532aca70-8c2f-4163-b5a7-781f17183d03\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.402891 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/532aca70-8c2f-4163-b5a7-781f17183d03-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-r8tgq\" (UID: \"532aca70-8c2f-4163-b5a7-781f17183d03\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.402919 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/532aca70-8c2f-4163-b5a7-781f17183d03-sasl-users\") pod \"default-interconnect-55bf8d5cb-r8tgq\" (UID: 
\"532aca70-8c2f-4163-b5a7-781f17183d03\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.402951 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/532aca70-8c2f-4163-b5a7-781f17183d03-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-r8tgq\" (UID: \"532aca70-8c2f-4163-b5a7-781f17183d03\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.402989 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22d52\" (UniqueName: \"kubernetes.io/projected/532aca70-8c2f-4163-b5a7-781f17183d03-kube-api-access-22d52\") pod \"default-interconnect-55bf8d5cb-r8tgq\" (UID: \"532aca70-8c2f-4163-b5a7-781f17183d03\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.504209 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/532aca70-8c2f-4163-b5a7-781f17183d03-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-r8tgq\" (UID: \"532aca70-8c2f-4163-b5a7-781f17183d03\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.504260 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-22d52\" (UniqueName: \"kubernetes.io/projected/532aca70-8c2f-4163-b5a7-781f17183d03-kube-api-access-22d52\") pod \"default-interconnect-55bf8d5cb-r8tgq\" (UID: \"532aca70-8c2f-4163-b5a7-781f17183d03\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.504290 5119 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/532aca70-8c2f-4163-b5a7-781f17183d03-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-r8tgq\" (UID: \"532aca70-8c2f-4163-b5a7-781f17183d03\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.504351 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/532aca70-8c2f-4163-b5a7-781f17183d03-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-r8tgq\" (UID: \"532aca70-8c2f-4163-b5a7-781f17183d03\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.504389 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/532aca70-8c2f-4163-b5a7-781f17183d03-sasl-config\") pod \"default-interconnect-55bf8d5cb-r8tgq\" (UID: \"532aca70-8c2f-4163-b5a7-781f17183d03\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.504412 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/532aca70-8c2f-4163-b5a7-781f17183d03-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-r8tgq\" (UID: \"532aca70-8c2f-4163-b5a7-781f17183d03\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.504433 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/532aca70-8c2f-4163-b5a7-781f17183d03-sasl-users\") pod 
\"default-interconnect-55bf8d5cb-r8tgq\" (UID: \"532aca70-8c2f-4163-b5a7-781f17183d03\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.505961 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/532aca70-8c2f-4163-b5a7-781f17183d03-sasl-config\") pod \"default-interconnect-55bf8d5cb-r8tgq\" (UID: \"532aca70-8c2f-4163-b5a7-781f17183d03\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.511321 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/532aca70-8c2f-4163-b5a7-781f17183d03-sasl-users\") pod \"default-interconnect-55bf8d5cb-r8tgq\" (UID: \"532aca70-8c2f-4163-b5a7-781f17183d03\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.511399 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/532aca70-8c2f-4163-b5a7-781f17183d03-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-r8tgq\" (UID: \"532aca70-8c2f-4163-b5a7-781f17183d03\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.513407 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/532aca70-8c2f-4163-b5a7-781f17183d03-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-r8tgq\" (UID: \"532aca70-8c2f-4163-b5a7-781f17183d03\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.517815 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/532aca70-8c2f-4163-b5a7-781f17183d03-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-r8tgq\" (UID: \"532aca70-8c2f-4163-b5a7-781f17183d03\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.519918 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/532aca70-8c2f-4163-b5a7-781f17183d03-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-r8tgq\" (UID: \"532aca70-8c2f-4163-b5a7-781f17183d03\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.536489 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-22d52\" (UniqueName: \"kubernetes.io/projected/532aca70-8c2f-4163-b5a7-781f17183d03-kube-api-access-22d52\") pod \"default-interconnect-55bf8d5cb-r8tgq\" (UID: \"532aca70-8c2f-4163-b5a7-781f17183d03\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.574142 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" Jan 21 10:21:50 crc kubenswrapper[5119]: I0121 10:21:50.982459 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-r8tgq"] Jan 21 10:21:50 crc kubenswrapper[5119]: W0121 10:21:50.988595 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod532aca70_8c2f_4163_b5a7_781f17183d03.slice/crio-cbc1f25f7f6664595f6d9bc4a55801d311cf93cb59c7c17522ff1ea32eb50c1b WatchSource:0}: Error finding container cbc1f25f7f6664595f6d9bc4a55801d311cf93cb59c7c17522ff1ea32eb50c1b: Status 404 returned error can't find the container with id cbc1f25f7f6664595f6d9bc4a55801d311cf93cb59c7c17522ff1ea32eb50c1b Jan 21 10:21:51 crc kubenswrapper[5119]: I0121 10:21:51.481100 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" event={"ID":"532aca70-8c2f-4163-b5a7-781f17183d03","Type":"ContainerStarted","Data":"cbc1f25f7f6664595f6d9bc4a55801d311cf93cb59c7c17522ff1ea32eb50c1b"} Jan 21 10:21:58 crc kubenswrapper[5119]: I0121 10:21:58.541441 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" event={"ID":"532aca70-8c2f-4163-b5a7-781f17183d03","Type":"ContainerStarted","Data":"0a0106ee20f08e71631044901e2ee587bc8795ebd6f41973b3fca8ca6819b292"} Jan 21 10:21:58 crc kubenswrapper[5119]: I0121 10:21:58.560111 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" podStartSLOduration=1.227582158 podStartE2EDuration="8.560088926s" podCreationTimestamp="2026-01-21 10:21:50 +0000 UTC" firstStartedPulling="2026-01-21 10:21:50.990440985 +0000 UTC m=+1626.658532663" lastFinishedPulling="2026-01-21 10:21:58.322947753 +0000 UTC m=+1633.991039431" observedRunningTime="2026-01-21 10:21:58.555547643 
+0000 UTC m=+1634.223639321" watchObservedRunningTime="2026-01-21 10:21:58.560088926 +0000 UTC m=+1634.228180604" Jan 21 10:22:00 crc kubenswrapper[5119]: I0121 10:22:00.142148 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483182-qwxqt"] Jan 21 10:22:00 crc kubenswrapper[5119]: I0121 10:22:00.155964 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483182-qwxqt" Jan 21 10:22:00 crc kubenswrapper[5119]: I0121 10:22:00.156049 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483182-qwxqt"] Jan 21 10:22:00 crc kubenswrapper[5119]: I0121 10:22:00.159113 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:22:00 crc kubenswrapper[5119]: I0121 10:22:00.159179 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:22:00 crc kubenswrapper[5119]: I0121 10:22:00.159396 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\"" Jan 21 10:22:00 crc kubenswrapper[5119]: I0121 10:22:00.224333 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2qzp\" (UniqueName: \"kubernetes.io/projected/4dd1e8a4-08ba-4743-a5f9-3c4b69dd6e39-kube-api-access-g2qzp\") pod \"auto-csr-approver-29483182-qwxqt\" (UID: \"4dd1e8a4-08ba-4743-a5f9-3c4b69dd6e39\") " pod="openshift-infra/auto-csr-approver-29483182-qwxqt" Jan 21 10:22:00 crc kubenswrapper[5119]: I0121 10:22:00.326060 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g2qzp\" (UniqueName: \"kubernetes.io/projected/4dd1e8a4-08ba-4743-a5f9-3c4b69dd6e39-kube-api-access-g2qzp\") pod \"auto-csr-approver-29483182-qwxqt\" (UID: 
\"4dd1e8a4-08ba-4743-a5f9-3c4b69dd6e39\") " pod="openshift-infra/auto-csr-approver-29483182-qwxqt" Jan 21 10:22:00 crc kubenswrapper[5119]: I0121 10:22:00.343919 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2qzp\" (UniqueName: \"kubernetes.io/projected/4dd1e8a4-08ba-4743-a5f9-3c4b69dd6e39-kube-api-access-g2qzp\") pod \"auto-csr-approver-29483182-qwxqt\" (UID: \"4dd1e8a4-08ba-4743-a5f9-3c4b69dd6e39\") " pod="openshift-infra/auto-csr-approver-29483182-qwxqt" Jan 21 10:22:00 crc kubenswrapper[5119]: I0121 10:22:00.476572 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483182-qwxqt" Jan 21 10:22:00 crc kubenswrapper[5119]: I0121 10:22:00.918575 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483182-qwxqt"] Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.562173 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483182-qwxqt" event={"ID":"4dd1e8a4-08ba-4743-a5f9-3c4b69dd6e39","Type":"ContainerStarted","Data":"3174197ac2905ad510053f2caae729644754c8f60610c87299c67647e42835ac"} Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.623515 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.628300 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.631496 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-web-config\"" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.631629 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.631763 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-0\"" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.631951 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-1\"" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.632051 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-2\"" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.632557 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-stf-dockercfg-t2c4n\"" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.632757 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-tls-assets-0\"" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.632931 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"serving-certs-ca-bundle\"" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.633087 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default\"" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.633314 5119 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"service-telemetry\"/\"default-session-secret\"" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.639767 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-prometheus-proxy-tls\"" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.745135 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/481848f8-834a-47be-9301-1153fcbc51ef-config\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.745183 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/481848f8-834a-47be-9301-1153fcbc51ef-config-out\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.745204 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/481848f8-834a-47be-9301-1153fcbc51ef-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.745227 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/481848f8-834a-47be-9301-1153fcbc51ef-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.745273 5119 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/481848f8-834a-47be-9301-1153fcbc51ef-tls-assets\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.745308 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/481848f8-834a-47be-9301-1153fcbc51ef-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.745349 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cd78cfff-1aee-4da1-91aa-e4989cad700f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd78cfff-1aee-4da1-91aa-e4989cad700f\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.745397 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stgl6\" (UniqueName: \"kubernetes.io/projected/481848f8-834a-47be-9301-1153fcbc51ef-kube-api-access-stgl6\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.745440 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/481848f8-834a-47be-9301-1153fcbc51ef-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " 
pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.745489 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/481848f8-834a-47be-9301-1153fcbc51ef-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.745538 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/481848f8-834a-47be-9301-1153fcbc51ef-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.745572 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/481848f8-834a-47be-9301-1153fcbc51ef-web-config\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.847153 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/481848f8-834a-47be-9301-1153fcbc51ef-tls-assets\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.847205 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/481848f8-834a-47be-9301-1153fcbc51ef-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: 
\"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.847248 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-cd78cfff-1aee-4da1-91aa-e4989cad700f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd78cfff-1aee-4da1-91aa-e4989cad700f\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.847290 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-stgl6\" (UniqueName: \"kubernetes.io/projected/481848f8-834a-47be-9301-1153fcbc51ef-kube-api-access-stgl6\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.847327 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/481848f8-834a-47be-9301-1153fcbc51ef-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.847360 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/481848f8-834a-47be-9301-1153fcbc51ef-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.847410 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/481848f8-834a-47be-9301-1153fcbc51ef-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.847439 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/481848f8-834a-47be-9301-1153fcbc51ef-web-config\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.847467 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/481848f8-834a-47be-9301-1153fcbc51ef-config\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.847493 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/481848f8-834a-47be-9301-1153fcbc51ef-config-out\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.847512 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/481848f8-834a-47be-9301-1153fcbc51ef-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.847532 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/481848f8-834a-47be-9301-1153fcbc51ef-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.849433 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/481848f8-834a-47be-9301-1153fcbc51ef-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.849518 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/481848f8-834a-47be-9301-1153fcbc51ef-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: E0121 10:22:01.849844 5119 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Jan 21 10:22:01 crc kubenswrapper[5119]: E0121 10:22:01.849919 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/481848f8-834a-47be-9301-1153fcbc51ef-secret-default-prometheus-proxy-tls podName:481848f8-834a-47be-9301-1153fcbc51ef nodeName:}" failed. No retries permitted until 2026-01-21 10:22:02.349899384 +0000 UTC m=+1638.017991062 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/481848f8-834a-47be-9301-1153fcbc51ef-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "481848f8-834a-47be-9301-1153fcbc51ef") : secret "default-prometheus-proxy-tls" not found Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.850192 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/481848f8-834a-47be-9301-1153fcbc51ef-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.852474 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/481848f8-834a-47be-9301-1153fcbc51ef-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.854693 5119 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.854730 5119 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-cd78cfff-1aee-4da1-91aa-e4989cad700f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd78cfff-1aee-4da1-91aa-e4989cad700f\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0f92fd12638e8da02bac5b1256ed3c0f4f311a290641d63b45bc1de41d974d3b/globalmount\"" pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.854774 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/481848f8-834a-47be-9301-1153fcbc51ef-config-out\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.855120 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/481848f8-834a-47be-9301-1153fcbc51ef-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.866067 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/481848f8-834a-47be-9301-1153fcbc51ef-tls-assets\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.877089 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/481848f8-834a-47be-9301-1153fcbc51ef-web-config\") pod \"prometheus-default-0\" (UID: 
\"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.877574 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/481848f8-834a-47be-9301-1153fcbc51ef-config\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.877618 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-stgl6\" (UniqueName: \"kubernetes.io/projected/481848f8-834a-47be-9301-1153fcbc51ef-kube-api-access-stgl6\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:01 crc kubenswrapper[5119]: I0121 10:22:01.912151 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-cd78cfff-1aee-4da1-91aa-e4989cad700f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd78cfff-1aee-4da1-91aa-e4989cad700f\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:02 crc kubenswrapper[5119]: I0121 10:22:02.354583 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/481848f8-834a-47be-9301-1153fcbc51ef-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:02 crc kubenswrapper[5119]: E0121 10:22:02.354790 5119 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Jan 21 10:22:02 crc kubenswrapper[5119]: E0121 10:22:02.355053 5119 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/481848f8-834a-47be-9301-1153fcbc51ef-secret-default-prometheus-proxy-tls podName:481848f8-834a-47be-9301-1153fcbc51ef nodeName:}" failed. No retries permitted until 2026-01-21 10:22:03.355033417 +0000 UTC m=+1639.023125095 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/481848f8-834a-47be-9301-1153fcbc51ef-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "481848f8-834a-47be-9301-1153fcbc51ef") : secret "default-prometheus-proxy-tls" not found Jan 21 10:22:02 crc kubenswrapper[5119]: I0121 10:22:02.574428 5119 generic.go:358] "Generic (PLEG): container finished" podID="4dd1e8a4-08ba-4743-a5f9-3c4b69dd6e39" containerID="7cfd1c789cd69d7fb462a8d1ab369dbe2a36afb5c260aae7e576d29bc3fa6c2b" exitCode=0 Jan 21 10:22:02 crc kubenswrapper[5119]: I0121 10:22:02.574586 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483182-qwxqt" event={"ID":"4dd1e8a4-08ba-4743-a5f9-3c4b69dd6e39","Type":"ContainerDied","Data":"7cfd1c789cd69d7fb462a8d1ab369dbe2a36afb5c260aae7e576d29bc3fa6c2b"} Jan 21 10:22:03 crc kubenswrapper[5119]: I0121 10:22:03.370215 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/481848f8-834a-47be-9301-1153fcbc51ef-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:22:03 crc kubenswrapper[5119]: I0121 10:22:03.376537 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/481848f8-834a-47be-9301-1153fcbc51ef-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"481848f8-834a-47be-9301-1153fcbc51ef\") " 
pod="service-telemetry/prometheus-default-0" Jan 21 10:22:03 crc kubenswrapper[5119]: I0121 10:22:03.448759 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Jan 21 10:22:03 crc kubenswrapper[5119]: I0121 10:22:03.666039 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 21 10:22:03 crc kubenswrapper[5119]: I0121 10:22:03.808594 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483182-qwxqt" Jan 21 10:22:03 crc kubenswrapper[5119]: I0121 10:22:03.879241 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2qzp\" (UniqueName: \"kubernetes.io/projected/4dd1e8a4-08ba-4743-a5f9-3c4b69dd6e39-kube-api-access-g2qzp\") pod \"4dd1e8a4-08ba-4743-a5f9-3c4b69dd6e39\" (UID: \"4dd1e8a4-08ba-4743-a5f9-3c4b69dd6e39\") " Jan 21 10:22:03 crc kubenswrapper[5119]: I0121 10:22:03.886787 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dd1e8a4-08ba-4743-a5f9-3c4b69dd6e39-kube-api-access-g2qzp" (OuterVolumeSpecName: "kube-api-access-g2qzp") pod "4dd1e8a4-08ba-4743-a5f9-3c4b69dd6e39" (UID: "4dd1e8a4-08ba-4743-a5f9-3c4b69dd6e39"). InnerVolumeSpecName "kube-api-access-g2qzp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:22:03 crc kubenswrapper[5119]: I0121 10:22:03.981032 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g2qzp\" (UniqueName: \"kubernetes.io/projected/4dd1e8a4-08ba-4743-a5f9-3c4b69dd6e39-kube-api-access-g2qzp\") on node \"crc\" DevicePath \"\"" Jan 21 10:22:04 crc kubenswrapper[5119]: I0121 10:22:04.603878 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483182-qwxqt" event={"ID":"4dd1e8a4-08ba-4743-a5f9-3c4b69dd6e39","Type":"ContainerDied","Data":"3174197ac2905ad510053f2caae729644754c8f60610c87299c67647e42835ac"} Jan 21 10:22:04 crc kubenswrapper[5119]: I0121 10:22:04.603908 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483182-qwxqt" Jan 21 10:22:04 crc kubenswrapper[5119]: I0121 10:22:04.603927 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3174197ac2905ad510053f2caae729644754c8f60610c87299c67647e42835ac" Jan 21 10:22:04 crc kubenswrapper[5119]: I0121 10:22:04.605230 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"481848f8-834a-47be-9301-1153fcbc51ef","Type":"ContainerStarted","Data":"9f26eb7a96c44358d43b64343bb7b7526e70c67e886d281c9c5dbdbe5bfdc9bf"} Jan 21 10:22:04 crc kubenswrapper[5119]: I0121 10:22:04.869657 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483176-vl5bq"] Jan 21 10:22:04 crc kubenswrapper[5119]: I0121 10:22:04.875352 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483176-vl5bq"] Jan 21 10:22:06 crc kubenswrapper[5119]: I0121 10:22:06.600407 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4808bb00-e516-4dc0-93b6-1acc311d4824" path="/var/lib/kubelet/pods/4808bb00-e516-4dc0-93b6-1acc311d4824/volumes" Jan 21 
10:22:10 crc kubenswrapper[5119]: I0121 10:22:10.645781 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"481848f8-834a-47be-9301-1153fcbc51ef","Type":"ContainerStarted","Data":"ca0a75b1cf403741bf37e791d84447c07decd62b09363486b65e0c0731618f06"} Jan 21 10:22:11 crc kubenswrapper[5119]: I0121 10:22:11.497446 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-hnfrz"] Jan 21 10:22:11 crc kubenswrapper[5119]: I0121 10:22:11.498443 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4dd1e8a4-08ba-4743-a5f9-3c4b69dd6e39" containerName="oc" Jan 21 10:22:11 crc kubenswrapper[5119]: I0121 10:22:11.498466 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd1e8a4-08ba-4743-a5f9-3c4b69dd6e39" containerName="oc" Jan 21 10:22:11 crc kubenswrapper[5119]: I0121 10:22:11.498591 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="4dd1e8a4-08ba-4743-a5f9-3c4b69dd6e39" containerName="oc" Jan 21 10:22:11 crc kubenswrapper[5119]: I0121 10:22:11.508129 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-694dc457d5-hnfrz" Jan 21 10:22:11 crc kubenswrapper[5119]: I0121 10:22:11.525510 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-hnfrz"] Jan 21 10:22:11 crc kubenswrapper[5119]: I0121 10:22:11.579937 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd6ks\" (UniqueName: \"kubernetes.io/projected/ec93941b-ddbb-42d6-ae36-3c643b48a65b-kube-api-access-kd6ks\") pod \"default-snmp-webhook-694dc457d5-hnfrz\" (UID: \"ec93941b-ddbb-42d6-ae36-3c643b48a65b\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-hnfrz" Jan 21 10:22:11 crc kubenswrapper[5119]: I0121 10:22:11.681407 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kd6ks\" (UniqueName: \"kubernetes.io/projected/ec93941b-ddbb-42d6-ae36-3c643b48a65b-kube-api-access-kd6ks\") pod \"default-snmp-webhook-694dc457d5-hnfrz\" (UID: \"ec93941b-ddbb-42d6-ae36-3c643b48a65b\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-hnfrz" Jan 21 10:22:11 crc kubenswrapper[5119]: I0121 10:22:11.701191 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd6ks\" (UniqueName: \"kubernetes.io/projected/ec93941b-ddbb-42d6-ae36-3c643b48a65b-kube-api-access-kd6ks\") pod \"default-snmp-webhook-694dc457d5-hnfrz\" (UID: \"ec93941b-ddbb-42d6-ae36-3c643b48a65b\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-hnfrz" Jan 21 10:22:11 crc kubenswrapper[5119]: I0121 10:22:11.842544 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-694dc457d5-hnfrz" Jan 21 10:22:12 crc kubenswrapper[5119]: I0121 10:22:12.053250 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-hnfrz"] Jan 21 10:22:12 crc kubenswrapper[5119]: I0121 10:22:12.664254 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-694dc457d5-hnfrz" event={"ID":"ec93941b-ddbb-42d6-ae36-3c643b48a65b","Type":"ContainerStarted","Data":"a1fe860689227ce67cea976c398e07d999d654c2dedd6dba0a8ec8952f047b79"} Jan 21 10:22:15 crc kubenswrapper[5119]: I0121 10:22:15.352840 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.066178 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.066479 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.071317 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-web-config\"" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.071434 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-alertmanager-proxy-tls\"" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.071484 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-generated\"" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.071450 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-tls-assets-0\"" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.071766 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-stf-dockercfg-jzp96\"" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.072168 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-cluster-tls-config\"" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.144408 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-24144e67-cb1d-4f49-a280-b6d97cafe640\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24144e67-cb1d-4f49-a280-b6d97cafe640\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.144510 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: 
\"kubernetes.io/secret/d0d6eb61-b1b6-4df6-a282-2f98000680b0-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.144533 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d0d6eb61-b1b6-4df6-a282-2f98000680b0-tls-assets\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.144571 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d0d6eb61-b1b6-4df6-a282-2f98000680b0-config-out\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.144594 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/d0d6eb61-b1b6-4df6-a282-2f98000680b0-config-volume\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.144636 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5swzv\" (UniqueName: \"kubernetes.io/projected/d0d6eb61-b1b6-4df6-a282-2f98000680b0-kube-api-access-5swzv\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.144654 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d0d6eb61-b1b6-4df6-a282-2f98000680b0-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.144669 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d0d6eb61-b1b6-4df6-a282-2f98000680b0-web-config\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.144704 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/d0d6eb61-b1b6-4df6-a282-2f98000680b0-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.246406 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5swzv\" (UniqueName: \"kubernetes.io/projected/d0d6eb61-b1b6-4df6-a282-2f98000680b0-kube-api-access-5swzv\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.246537 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d0d6eb61-b1b6-4df6-a282-2f98000680b0-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.246563 5119 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d0d6eb61-b1b6-4df6-a282-2f98000680b0-web-config\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.246627 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/d0d6eb61-b1b6-4df6-a282-2f98000680b0-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.246668 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-24144e67-cb1d-4f49-a280-b6d97cafe640\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24144e67-cb1d-4f49-a280-b6d97cafe640\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:16 crc kubenswrapper[5119]: E0121 10:22:16.246701 5119 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 21 10:22:16 crc kubenswrapper[5119]: E0121 10:22:16.246801 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0d6eb61-b1b6-4df6-a282-2f98000680b0-secret-default-alertmanager-proxy-tls podName:d0d6eb61-b1b6-4df6-a282-2f98000680b0 nodeName:}" failed. No retries permitted until 2026-01-21 10:22:16.746778637 +0000 UTC m=+1652.414870305 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/d0d6eb61-b1b6-4df6-a282-2f98000680b0-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "d0d6eb61-b1b6-4df6-a282-2f98000680b0") : secret "default-alertmanager-proxy-tls" not found Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.246706 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/d0d6eb61-b1b6-4df6-a282-2f98000680b0-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.246840 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d0d6eb61-b1b6-4df6-a282-2f98000680b0-tls-assets\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.246881 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d0d6eb61-b1b6-4df6-a282-2f98000680b0-config-out\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.246908 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/d0d6eb61-b1b6-4df6-a282-2f98000680b0-config-volume\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.251770 5119 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.251810 5119 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-24144e67-cb1d-4f49-a280-b6d97cafe640\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24144e67-cb1d-4f49-a280-b6d97cafe640\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a1c37e77147235d35e03316a51ccc8eb03e2d8c5774a31811dba42428561a736/globalmount\"" pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.259391 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d0d6eb61-b1b6-4df6-a282-2f98000680b0-config-out\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.259829 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d0d6eb61-b1b6-4df6-a282-2f98000680b0-web-config\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.259892 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/d0d6eb61-b1b6-4df6-a282-2f98000680b0-config-volume\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.260268 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: 
\"kubernetes.io/secret/d0d6eb61-b1b6-4df6-a282-2f98000680b0-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.261097 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d0d6eb61-b1b6-4df6-a282-2f98000680b0-tls-assets\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.263857 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5swzv\" (UniqueName: \"kubernetes.io/projected/d0d6eb61-b1b6-4df6-a282-2f98000680b0-kube-api-access-5swzv\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.277250 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/d0d6eb61-b1b6-4df6-a282-2f98000680b0-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.293149 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-24144e67-cb1d-4f49-a280-b6d97cafe640\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24144e67-cb1d-4f49-a280-b6d97cafe640\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.702093 5119 generic.go:358] "Generic (PLEG): container finished" podID="481848f8-834a-47be-9301-1153fcbc51ef" 
containerID="ca0a75b1cf403741bf37e791d84447c07decd62b09363486b65e0c0731618f06" exitCode=0 Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.702198 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"481848f8-834a-47be-9301-1153fcbc51ef","Type":"ContainerDied","Data":"ca0a75b1cf403741bf37e791d84447c07decd62b09363486b65e0c0731618f06"} Jan 21 10:22:16 crc kubenswrapper[5119]: I0121 10:22:16.761125 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d0d6eb61-b1b6-4df6-a282-2f98000680b0-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:16 crc kubenswrapper[5119]: E0121 10:22:16.761818 5119 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 21 10:22:16 crc kubenswrapper[5119]: E0121 10:22:16.761891 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0d6eb61-b1b6-4df6-a282-2f98000680b0-secret-default-alertmanager-proxy-tls podName:d0d6eb61-b1b6-4df6-a282-2f98000680b0 nodeName:}" failed. No retries permitted until 2026-01-21 10:22:17.76186928 +0000 UTC m=+1653.429960958 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/d0d6eb61-b1b6-4df6-a282-2f98000680b0-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "d0d6eb61-b1b6-4df6-a282-2f98000680b0") : secret "default-alertmanager-proxy-tls" not found Jan 21 10:22:17 crc kubenswrapper[5119]: I0121 10:22:17.780036 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d0d6eb61-b1b6-4df6-a282-2f98000680b0-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:17 crc kubenswrapper[5119]: E0121 10:22:17.780268 5119 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 21 10:22:17 crc kubenswrapper[5119]: E0121 10:22:17.780573 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0d6eb61-b1b6-4df6-a282-2f98000680b0-secret-default-alertmanager-proxy-tls podName:d0d6eb61-b1b6-4df6-a282-2f98000680b0 nodeName:}" failed. No retries permitted until 2026-01-21 10:22:19.780550436 +0000 UTC m=+1655.448642114 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/d0d6eb61-b1b6-4df6-a282-2f98000680b0-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "d0d6eb61-b1b6-4df6-a282-2f98000680b0") : secret "default-alertmanager-proxy-tls" not found Jan 21 10:22:19 crc kubenswrapper[5119]: I0121 10:22:19.728816 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-694dc457d5-hnfrz" event={"ID":"ec93941b-ddbb-42d6-ae36-3c643b48a65b","Type":"ContainerStarted","Data":"8b4819c5fa753edad5285af9dec6f9ccf49442a56073a3d9d73607e6bd5451c6"} Jan 21 10:22:19 crc kubenswrapper[5119]: I0121 10:22:19.747803 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-694dc457d5-hnfrz" podStartSLOduration=1.6880914759999999 podStartE2EDuration="8.747785684s" podCreationTimestamp="2026-01-21 10:22:11 +0000 UTC" firstStartedPulling="2026-01-21 10:22:12.079721552 +0000 UTC m=+1647.747813230" lastFinishedPulling="2026-01-21 10:22:19.13941576 +0000 UTC m=+1654.807507438" observedRunningTime="2026-01-21 10:22:19.747498166 +0000 UTC m=+1655.415589844" watchObservedRunningTime="2026-01-21 10:22:19.747785684 +0000 UTC m=+1655.415877362" Jan 21 10:22:19 crc kubenswrapper[5119]: I0121 10:22:19.811041 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d0d6eb61-b1b6-4df6-a282-2f98000680b0-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:19 crc kubenswrapper[5119]: I0121 10:22:19.834295 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/d0d6eb61-b1b6-4df6-a282-2f98000680b0-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"d0d6eb61-b1b6-4df6-a282-2f98000680b0\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:20 crc kubenswrapper[5119]: I0121 10:22:20.054503 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0" Jan 21 10:22:20 crc kubenswrapper[5119]: I0121 10:22:20.267631 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 21 10:22:20 crc kubenswrapper[5119]: I0121 10:22:20.746041 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"d0d6eb61-b1b6-4df6-a282-2f98000680b0","Type":"ContainerStarted","Data":"0f0c60d7e037b691a04e9a35d9b33330bed280491ade1f69ee1b601c306d04a9"} Jan 21 10:22:23 crc kubenswrapper[5119]: I0121 10:22:23.764770 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"d0d6eb61-b1b6-4df6-a282-2f98000680b0","Type":"ContainerStarted","Data":"c34de774659a23c2542d7b86f8f8edd21a76a6a8a0db3c3fe11fb6584e2868c3"} Jan 21 10:22:24 crc kubenswrapper[5119]: I0121 10:22:24.773524 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"481848f8-834a-47be-9301-1153fcbc51ef","Type":"ContainerStarted","Data":"4f9ea9d1e36d7868c7bcf7663efaa33070a3bff17b338236a6c7027b24210ebc"} Jan 21 10:22:26 crc kubenswrapper[5119]: I0121 10:22:26.787829 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"481848f8-834a-47be-9301-1153fcbc51ef","Type":"ContainerStarted","Data":"a778c7b7fa7b1a7992d988a49cfdb2c3bee2cb852dc4940f94f26cf9a147c414"} Jan 21 10:22:28 crc kubenswrapper[5119]: I0121 10:22:28.804581 5119 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k"] Jan 21 10:22:28 crc kubenswrapper[5119]: I0121 10:22:28.822633 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k"] Jan 21 10:22:28 crc kubenswrapper[5119]: I0121 10:22:28.822918 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" Jan 21 10:22:28 crc kubenswrapper[5119]: I0121 10:22:28.831325 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-proxy-tls\"" Jan 21 10:22:28 crc kubenswrapper[5119]: I0121 10:22:28.831552 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-session-secret\"" Jan 21 10:22:28 crc kubenswrapper[5119]: I0121 10:22:28.831786 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-sg-core-configmap\"" Jan 21 10:22:28 crc kubenswrapper[5119]: I0121 10:22:28.832007 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-dockercfg-wnhfj\"" Jan 21 10:22:28 crc kubenswrapper[5119]: I0121 10:22:28.942967 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f2986212-b53b-4df9-9cd5-884f35c89cba-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k\" (UID: \"f2986212-b53b-4df9-9cd5-884f35c89cba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" Jan 21 10:22:28 crc kubenswrapper[5119]: I0121 10:22:28.943046 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/f2986212-b53b-4df9-9cd5-884f35c89cba-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k\" (UID: \"f2986212-b53b-4df9-9cd5-884f35c89cba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" Jan 21 10:22:28 crc kubenswrapper[5119]: I0121 10:22:28.943117 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/f2986212-b53b-4df9-9cd5-884f35c89cba-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k\" (UID: \"f2986212-b53b-4df9-9cd5-884f35c89cba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" Jan 21 10:22:28 crc kubenswrapper[5119]: I0121 10:22:28.943154 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/f2986212-b53b-4df9-9cd5-884f35c89cba-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k\" (UID: \"f2986212-b53b-4df9-9cd5-884f35c89cba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" Jan 21 10:22:28 crc kubenswrapper[5119]: I0121 10:22:28.943193 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxlf8\" (UniqueName: \"kubernetes.io/projected/f2986212-b53b-4df9-9cd5-884f35c89cba-kube-api-access-gxlf8\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k\" (UID: \"f2986212-b53b-4df9-9cd5-884f35c89cba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" Jan 21 10:22:29 crc kubenswrapper[5119]: I0121 10:22:29.044550 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gxlf8\" (UniqueName: \"kubernetes.io/projected/f2986212-b53b-4df9-9cd5-884f35c89cba-kube-api-access-gxlf8\") pod 
\"default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k\" (UID: \"f2986212-b53b-4df9-9cd5-884f35c89cba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" Jan 21 10:22:29 crc kubenswrapper[5119]: I0121 10:22:29.044632 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f2986212-b53b-4df9-9cd5-884f35c89cba-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k\" (UID: \"f2986212-b53b-4df9-9cd5-884f35c89cba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" Jan 21 10:22:29 crc kubenswrapper[5119]: I0121 10:22:29.044669 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/f2986212-b53b-4df9-9cd5-884f35c89cba-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k\" (UID: \"f2986212-b53b-4df9-9cd5-884f35c89cba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" Jan 21 10:22:29 crc kubenswrapper[5119]: I0121 10:22:29.044721 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/f2986212-b53b-4df9-9cd5-884f35c89cba-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k\" (UID: \"f2986212-b53b-4df9-9cd5-884f35c89cba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" Jan 21 10:22:29 crc kubenswrapper[5119]: I0121 10:22:29.044750 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/f2986212-b53b-4df9-9cd5-884f35c89cba-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k\" (UID: \"f2986212-b53b-4df9-9cd5-884f35c89cba\") " 
pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" Jan 21 10:22:29 crc kubenswrapper[5119]: E0121 10:22:29.045519 5119 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Jan 21 10:22:29 crc kubenswrapper[5119]: I0121 10:22:29.045694 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/f2986212-b53b-4df9-9cd5-884f35c89cba-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k\" (UID: \"f2986212-b53b-4df9-9cd5-884f35c89cba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" Jan 21 10:22:29 crc kubenswrapper[5119]: E0121 10:22:29.045737 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2986212-b53b-4df9-9cd5-884f35c89cba-default-cloud1-coll-meter-proxy-tls podName:f2986212-b53b-4df9-9cd5-884f35c89cba nodeName:}" failed. No retries permitted until 2026-01-21 10:22:29.545697831 +0000 UTC m=+1665.213789519 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/f2986212-b53b-4df9-9cd5-884f35c89cba-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" (UID: "f2986212-b53b-4df9-9cd5-884f35c89cba") : secret "default-cloud1-coll-meter-proxy-tls" not found Jan 21 10:22:29 crc kubenswrapper[5119]: I0121 10:22:29.050820 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/f2986212-b53b-4df9-9cd5-884f35c89cba-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k\" (UID: \"f2986212-b53b-4df9-9cd5-884f35c89cba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" Jan 21 10:22:29 crc kubenswrapper[5119]: I0121 10:22:29.056020 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/f2986212-b53b-4df9-9cd5-884f35c89cba-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k\" (UID: \"f2986212-b53b-4df9-9cd5-884f35c89cba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" Jan 21 10:22:29 crc kubenswrapper[5119]: I0121 10:22:29.065264 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxlf8\" (UniqueName: \"kubernetes.io/projected/f2986212-b53b-4df9-9cd5-884f35c89cba-kube-api-access-gxlf8\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k\" (UID: \"f2986212-b53b-4df9-9cd5-884f35c89cba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" Jan 21 10:22:29 crc kubenswrapper[5119]: I0121 10:22:29.551975 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f2986212-b53b-4df9-9cd5-884f35c89cba-default-cloud1-coll-meter-proxy-tls\") pod 
\"default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k\" (UID: \"f2986212-b53b-4df9-9cd5-884f35c89cba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" Jan 21 10:22:29 crc kubenswrapper[5119]: E0121 10:22:29.552159 5119 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Jan 21 10:22:29 crc kubenswrapper[5119]: E0121 10:22:29.552467 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2986212-b53b-4df9-9cd5-884f35c89cba-default-cloud1-coll-meter-proxy-tls podName:f2986212-b53b-4df9-9cd5-884f35c89cba nodeName:}" failed. No retries permitted until 2026-01-21 10:22:30.552447949 +0000 UTC m=+1666.220539627 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/f2986212-b53b-4df9-9cd5-884f35c89cba-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" (UID: "f2986212-b53b-4df9-9cd5-884f35c89cba") : secret "default-cloud1-coll-meter-proxy-tls" not found Jan 21 10:22:29 crc kubenswrapper[5119]: I0121 10:22:29.814171 5119 generic.go:358] "Generic (PLEG): container finished" podID="d0d6eb61-b1b6-4df6-a282-2f98000680b0" containerID="c34de774659a23c2542d7b86f8f8edd21a76a6a8a0db3c3fe11fb6584e2868c3" exitCode=0 Jan 21 10:22:29 crc kubenswrapper[5119]: I0121 10:22:29.814214 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"d0d6eb61-b1b6-4df6-a282-2f98000680b0","Type":"ContainerDied","Data":"c34de774659a23c2542d7b86f8f8edd21a76a6a8a0db3c3fe11fb6584e2868c3"} Jan 21 10:22:30 crc kubenswrapper[5119]: I0121 10:22:30.570954 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/f2986212-b53b-4df9-9cd5-884f35c89cba-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k\" (UID: \"f2986212-b53b-4df9-9cd5-884f35c89cba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" Jan 21 10:22:30 crc kubenswrapper[5119]: I0121 10:22:30.593245 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f2986212-b53b-4df9-9cd5-884f35c89cba-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k\" (UID: \"f2986212-b53b-4df9-9cd5-884f35c89cba\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" Jan 21 10:22:30 crc kubenswrapper[5119]: I0121 10:22:30.659887 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" Jan 21 10:22:31 crc kubenswrapper[5119]: I0121 10:22:31.824831 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t"] Jan 21 10:22:31 crc kubenswrapper[5119]: I0121 10:22:31.839975 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" Jan 21 10:22:31 crc kubenswrapper[5119]: I0121 10:22:31.843358 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-proxy-tls\"" Jan 21 10:22:31 crc kubenswrapper[5119]: I0121 10:22:31.844040 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t"] Jan 21 10:22:31 crc kubenswrapper[5119]: I0121 10:22:31.849428 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-sg-core-configmap\"" Jan 21 10:22:31 crc kubenswrapper[5119]: I0121 10:22:31.987093 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e23122ba-6ad2-407e-aaeb-7c8f6e27ab54-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t\" (UID: \"e23122ba-6ad2-407e-aaeb-7c8f6e27ab54\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" Jan 21 10:22:31 crc kubenswrapper[5119]: I0121 10:22:31.987141 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl8hc\" (UniqueName: \"kubernetes.io/projected/e23122ba-6ad2-407e-aaeb-7c8f6e27ab54-kube-api-access-hl8hc\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t\" (UID: \"e23122ba-6ad2-407e-aaeb-7c8f6e27ab54\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" Jan 21 10:22:31 crc kubenswrapper[5119]: I0121 10:22:31.987162 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e23122ba-6ad2-407e-aaeb-7c8f6e27ab54-socket-dir\") pod 
\"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t\" (UID: \"e23122ba-6ad2-407e-aaeb-7c8f6e27ab54\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" Jan 21 10:22:31 crc kubenswrapper[5119]: I0121 10:22:31.987306 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e23122ba-6ad2-407e-aaeb-7c8f6e27ab54-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t\" (UID: \"e23122ba-6ad2-407e-aaeb-7c8f6e27ab54\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" Jan 21 10:22:31 crc kubenswrapper[5119]: I0121 10:22:31.987326 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/e23122ba-6ad2-407e-aaeb-7c8f6e27ab54-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t\" (UID: \"e23122ba-6ad2-407e-aaeb-7c8f6e27ab54\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" Jan 21 10:22:32 crc kubenswrapper[5119]: I0121 10:22:32.090435 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e23122ba-6ad2-407e-aaeb-7c8f6e27ab54-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t\" (UID: \"e23122ba-6ad2-407e-aaeb-7c8f6e27ab54\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" Jan 21 10:22:32 crc kubenswrapper[5119]: I0121 10:22:32.090590 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hl8hc\" (UniqueName: \"kubernetes.io/projected/e23122ba-6ad2-407e-aaeb-7c8f6e27ab54-kube-api-access-hl8hc\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t\" (UID: \"e23122ba-6ad2-407e-aaeb-7c8f6e27ab54\") " 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" Jan 21 10:22:32 crc kubenswrapper[5119]: I0121 10:22:32.090630 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e23122ba-6ad2-407e-aaeb-7c8f6e27ab54-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t\" (UID: \"e23122ba-6ad2-407e-aaeb-7c8f6e27ab54\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" Jan 21 10:22:32 crc kubenswrapper[5119]: I0121 10:22:32.090697 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e23122ba-6ad2-407e-aaeb-7c8f6e27ab54-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t\" (UID: \"e23122ba-6ad2-407e-aaeb-7c8f6e27ab54\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" Jan 21 10:22:32 crc kubenswrapper[5119]: E0121 10:22:32.090720 5119 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 21 10:22:32 crc kubenswrapper[5119]: E0121 10:22:32.090813 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e23122ba-6ad2-407e-aaeb-7c8f6e27ab54-default-cloud1-ceil-meter-proxy-tls podName:e23122ba-6ad2-407e-aaeb-7c8f6e27ab54 nodeName:}" failed. No retries permitted until 2026-01-21 10:22:32.59078855 +0000 UTC m=+1668.258880228 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/e23122ba-6ad2-407e-aaeb-7c8f6e27ab54-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" (UID: "e23122ba-6ad2-407e-aaeb-7c8f6e27ab54") : secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 21 10:22:32 crc kubenswrapper[5119]: I0121 10:22:32.090848 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/e23122ba-6ad2-407e-aaeb-7c8f6e27ab54-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t\" (UID: \"e23122ba-6ad2-407e-aaeb-7c8f6e27ab54\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" Jan 21 10:22:32 crc kubenswrapper[5119]: I0121 10:22:32.091117 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e23122ba-6ad2-407e-aaeb-7c8f6e27ab54-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t\" (UID: \"e23122ba-6ad2-407e-aaeb-7c8f6e27ab54\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" Jan 21 10:22:32 crc kubenswrapper[5119]: I0121 10:22:32.091785 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e23122ba-6ad2-407e-aaeb-7c8f6e27ab54-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t\" (UID: \"e23122ba-6ad2-407e-aaeb-7c8f6e27ab54\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" Jan 21 10:22:32 crc kubenswrapper[5119]: I0121 10:22:32.103641 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/e23122ba-6ad2-407e-aaeb-7c8f6e27ab54-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t\" (UID: 
\"e23122ba-6ad2-407e-aaeb-7c8f6e27ab54\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" Jan 21 10:22:32 crc kubenswrapper[5119]: I0121 10:22:32.112264 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl8hc\" (UniqueName: \"kubernetes.io/projected/e23122ba-6ad2-407e-aaeb-7c8f6e27ab54-kube-api-access-hl8hc\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t\" (UID: \"e23122ba-6ad2-407e-aaeb-7c8f6e27ab54\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" Jan 21 10:22:32 crc kubenswrapper[5119]: I0121 10:22:32.597309 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e23122ba-6ad2-407e-aaeb-7c8f6e27ab54-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t\" (UID: \"e23122ba-6ad2-407e-aaeb-7c8f6e27ab54\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" Jan 21 10:22:32 crc kubenswrapper[5119]: E0121 10:22:32.597458 5119 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 21 10:22:32 crc kubenswrapper[5119]: E0121 10:22:32.597597 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e23122ba-6ad2-407e-aaeb-7c8f6e27ab54-default-cloud1-ceil-meter-proxy-tls podName:e23122ba-6ad2-407e-aaeb-7c8f6e27ab54 nodeName:}" failed. No retries permitted until 2026-01-21 10:22:33.597578538 +0000 UTC m=+1669.265670206 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/e23122ba-6ad2-407e-aaeb-7c8f6e27ab54-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" (UID: "e23122ba-6ad2-407e-aaeb-7c8f6e27ab54") : secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 21 10:22:33 crc kubenswrapper[5119]: I0121 10:22:33.611318 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e23122ba-6ad2-407e-aaeb-7c8f6e27ab54-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t\" (UID: \"e23122ba-6ad2-407e-aaeb-7c8f6e27ab54\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" Jan 21 10:22:33 crc kubenswrapper[5119]: I0121 10:22:33.631763 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e23122ba-6ad2-407e-aaeb-7c8f6e27ab54-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t\" (UID: \"e23122ba-6ad2-407e-aaeb-7c8f6e27ab54\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" Jan 21 10:22:33 crc kubenswrapper[5119]: I0121 10:22:33.674342 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" Jan 21 10:22:33 crc kubenswrapper[5119]: I0121 10:22:33.863723 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"481848f8-834a-47be-9301-1153fcbc51ef","Type":"ContainerStarted","Data":"03d41927d8b52a0c940b5acef4839b257bb585493a3137295bf89eac3245e9ff"} Jan 21 10:22:33 crc kubenswrapper[5119]: I0121 10:22:33.890040 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=3.957352455 podStartE2EDuration="33.89001941s" podCreationTimestamp="2026-01-21 10:22:00 +0000 UTC" firstStartedPulling="2026-01-21 10:22:03.671074549 +0000 UTC m=+1639.339166237" lastFinishedPulling="2026-01-21 10:22:33.603741514 +0000 UTC m=+1669.271833192" observedRunningTime="2026-01-21 10:22:33.884930532 +0000 UTC m=+1669.553022220" watchObservedRunningTime="2026-01-21 10:22:33.89001941 +0000 UTC m=+1669.558111088" Jan 21 10:22:33 crc kubenswrapper[5119]: I0121 10:22:33.935765 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k"] Jan 21 10:22:33 crc kubenswrapper[5119]: W0121 10:22:33.945401 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2986212_b53b_4df9_9cd5_884f35c89cba.slice/crio-c3c8cf75de010600f56f99c49aa825d4e8d76a8ca2e66048d61cfa99385dbd3b WatchSource:0}: Error finding container c3c8cf75de010600f56f99c49aa825d4e8d76a8ca2e66048d61cfa99385dbd3b: Status 404 returned error can't find the container with id c3c8cf75de010600f56f99c49aa825d4e8d76a8ca2e66048d61cfa99385dbd3b Jan 21 10:22:34 crc kubenswrapper[5119]: I0121 10:22:34.103775 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t"] Jan 21 10:22:34 
crc kubenswrapper[5119]: W0121 10:22:34.108894 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode23122ba_6ad2_407e_aaeb_7c8f6e27ab54.slice/crio-0c3ff1311011552d2fd4db9294beab126114d9ddfc3033fb1752cae4657b9975 WatchSource:0}: Error finding container 0c3ff1311011552d2fd4db9294beab126114d9ddfc3033fb1752cae4657b9975: Status 404 returned error can't find the container with id 0c3ff1311011552d2fd4db9294beab126114d9ddfc3033fb1752cae4657b9975 Jan 21 10:22:34 crc kubenswrapper[5119]: I0121 10:22:34.870364 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" event={"ID":"f2986212-b53b-4df9-9cd5-884f35c89cba","Type":"ContainerStarted","Data":"c3c8cf75de010600f56f99c49aa825d4e8d76a8ca2e66048d61cfa99385dbd3b"} Jan 21 10:22:34 crc kubenswrapper[5119]: I0121 10:22:34.874217 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" event={"ID":"e23122ba-6ad2-407e-aaeb-7c8f6e27ab54","Type":"ContainerStarted","Data":"0c3ff1311011552d2fd4db9294beab126114d9ddfc3033fb1752cae4657b9975"} Jan 21 10:22:35 crc kubenswrapper[5119]: I0121 10:22:35.655563 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv"] Jan 21 10:22:35 crc kubenswrapper[5119]: I0121 10:22:35.664955 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv"] Jan 21 10:22:35 crc kubenswrapper[5119]: I0121 10:22:35.665098 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" Jan 21 10:22:35 crc kubenswrapper[5119]: I0121 10:22:35.671791 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-sg-core-configmap\"" Jan 21 10:22:35 crc kubenswrapper[5119]: I0121 10:22:35.671961 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-proxy-tls\"" Jan 21 10:22:35 crc kubenswrapper[5119]: I0121 10:22:35.755771 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l2h4\" (UniqueName: \"kubernetes.io/projected/2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7-kube-api-access-4l2h4\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv\" (UID: \"2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" Jan 21 10:22:35 crc kubenswrapper[5119]: I0121 10:22:35.755922 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv\" (UID: \"2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" Jan 21 10:22:35 crc kubenswrapper[5119]: I0121 10:22:35.756034 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv\" (UID: \"2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" Jan 21 10:22:35 
crc kubenswrapper[5119]: I0121 10:22:35.756106 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv\" (UID: \"2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" Jan 21 10:22:35 crc kubenswrapper[5119]: I0121 10:22:35.756297 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv\" (UID: \"2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" Jan 21 10:22:35 crc kubenswrapper[5119]: I0121 10:22:35.857452 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv\" (UID: \"2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" Jan 21 10:22:35 crc kubenswrapper[5119]: I0121 10:22:35.857556 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv\" (UID: \"2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" Jan 21 10:22:35 crc kubenswrapper[5119]: I0121 10:22:35.857627 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv\" (UID: \"2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" Jan 21 10:22:35 crc kubenswrapper[5119]: I0121 10:22:35.857657 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4l2h4\" (UniqueName: \"kubernetes.io/projected/2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7-kube-api-access-4l2h4\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv\" (UID: \"2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" Jan 21 10:22:35 crc kubenswrapper[5119]: I0121 10:22:35.857707 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv\" (UID: \"2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" Jan 21 10:22:35 crc kubenswrapper[5119]: E0121 10:22:35.857736 5119 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Jan 21 10:22:35 crc kubenswrapper[5119]: E0121 10:22:35.858013 5119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7-default-cloud1-sens-meter-proxy-tls podName:2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7 nodeName:}" failed. No retries permitted until 2026-01-21 10:22:36.357806923 +0000 UTC m=+1672.025898601 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" (UID: "2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7") : secret "default-cloud1-sens-meter-proxy-tls" not found Jan 21 10:22:35 crc kubenswrapper[5119]: I0121 10:22:35.858528 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv\" (UID: \"2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" Jan 21 10:22:35 crc kubenswrapper[5119]: I0121 10:22:35.858656 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv\" (UID: \"2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" Jan 21 10:22:35 crc kubenswrapper[5119]: I0121 10:22:35.862323 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv\" (UID: \"2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" Jan 21 10:22:35 crc kubenswrapper[5119]: I0121 10:22:35.873807 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l2h4\" (UniqueName: \"kubernetes.io/projected/2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7-kube-api-access-4l2h4\") pod 
\"default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv\" (UID: \"2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" Jan 21 10:22:35 crc kubenswrapper[5119]: I0121 10:22:35.881303 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" event={"ID":"f2986212-b53b-4df9-9cd5-884f35c89cba","Type":"ContainerStarted","Data":"678e0c1cd49d711b451dfa104ca162c6da538132156fa92509b06e5ade06c563"} Jan 21 10:22:35 crc kubenswrapper[5119]: I0121 10:22:35.882613 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" event={"ID":"e23122ba-6ad2-407e-aaeb-7c8f6e27ab54","Type":"ContainerStarted","Data":"37d0e87c8b1568b467fa79f17dbb5f2cc0a467fdb14fd22e87db5679a71e9bab"} Jan 21 10:22:35 crc kubenswrapper[5119]: I0121 10:22:35.884521 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"d0d6eb61-b1b6-4df6-a282-2f98000680b0","Type":"ContainerStarted","Data":"9ba4d8a3d26c0559d6baca496664ff39a31c6fe02475f6cdb652c5330affaaea"} Jan 21 10:22:36 crc kubenswrapper[5119]: I0121 10:22:36.363857 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv\" (UID: \"2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" Jan 21 10:22:36 crc kubenswrapper[5119]: E0121 10:22:36.364025 5119 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Jan 21 10:22:36 crc kubenswrapper[5119]: E0121 10:22:36.364109 5119 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7-default-cloud1-sens-meter-proxy-tls podName:2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7 nodeName:}" failed. No retries permitted until 2026-01-21 10:22:37.364089948 +0000 UTC m=+1673.032181626 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" (UID: "2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7") : secret "default-cloud1-sens-meter-proxy-tls" not found Jan 21 10:22:36 crc kubenswrapper[5119]: I0121 10:22:36.898638 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" event={"ID":"e23122ba-6ad2-407e-aaeb-7c8f6e27ab54","Type":"ContainerStarted","Data":"35c35c470220aef170db93b9c75a68b8290ec835024a659fd8ed30d5e657fd6c"} Jan 21 10:22:36 crc kubenswrapper[5119]: I0121 10:22:36.900631 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" event={"ID":"f2986212-b53b-4df9-9cd5-884f35c89cba","Type":"ContainerStarted","Data":"87f8ff1ffa8590bccca6a619d1cecc88c173dfdeb2c92db4f78626f5bd473a38"} Jan 21 10:22:37 crc kubenswrapper[5119]: I0121 10:22:37.389831 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv\" (UID: \"2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" Jan 21 10:22:37 crc kubenswrapper[5119]: I0121 10:22:37.397570 5119 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv\" (UID: \"2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" Jan 21 10:22:37 crc kubenswrapper[5119]: I0121 10:22:37.489273 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" Jan 21 10:22:37 crc kubenswrapper[5119]: I0121 10:22:37.914446 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"d0d6eb61-b1b6-4df6-a282-2f98000680b0","Type":"ContainerStarted","Data":"ffe3d1567e93de8e626d6350f541ad415b0d23e2f5b9a728fcd2e20b29220ef1"} Jan 21 10:22:37 crc kubenswrapper[5119]: I0121 10:22:37.951505 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv"] Jan 21 10:22:38 crc kubenswrapper[5119]: I0121 10:22:38.449065 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/prometheus-default-0" Jan 21 10:22:38 crc kubenswrapper[5119]: I0121 10:22:38.932021 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" event={"ID":"2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7","Type":"ContainerStarted","Data":"43862649d6b0127122ca03d71aff797dc7d973a32429ce79416c5f4a4539a04b"} Jan 21 10:22:42 crc kubenswrapper[5119]: I0121 10:22:42.962420 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" event={"ID":"f2986212-b53b-4df9-9cd5-884f35c89cba","Type":"ContainerStarted","Data":"28a0b2e598c13cf9de07d6e4942866b31ee0951fa0b0cfe8e31aaf774c8c03d9"} Jan 21 10:22:42 crc 
kubenswrapper[5119]: I0121 10:22:42.965313 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" event={"ID":"2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7","Type":"ContainerStarted","Data":"869dd1ecf40fe090877f0c36d5e20c91e5b4b9dbec6a4f0514aff4e2f394182d"} Jan 21 10:22:42 crc kubenswrapper[5119]: I0121 10:22:42.968280 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" event={"ID":"e23122ba-6ad2-407e-aaeb-7c8f6e27ab54","Type":"ContainerStarted","Data":"7767f52b6377f324eed8706f21809a9291136e89f403f85932faa70f4e2333d5"} Jan 21 10:22:42 crc kubenswrapper[5119]: I0121 10:22:42.971404 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"d0d6eb61-b1b6-4df6-a282-2f98000680b0","Type":"ContainerStarted","Data":"d330285c02001fbd0bacd9991f08980b1a2d2b03d52d2440b07cf689b3183075"} Jan 21 10:22:42 crc kubenswrapper[5119]: I0121 10:22:42.984811 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm"] Jan 21 10:22:42 crc kubenswrapper[5119]: I0121 10:22:42.989233 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" podStartSLOduration=6.411495307 podStartE2EDuration="14.989213137s" podCreationTimestamp="2026-01-21 10:22:28 +0000 UTC" firstStartedPulling="2026-01-21 10:22:33.947290464 +0000 UTC m=+1669.615382142" lastFinishedPulling="2026-01-21 10:22:42.525008304 +0000 UTC m=+1678.193099972" observedRunningTime="2026-01-21 10:22:42.98455084 +0000 UTC m=+1678.652642508" watchObservedRunningTime="2026-01-21 10:22:42.989213137 +0000 UTC m=+1678.657304815" Jan 21 10:22:43 crc kubenswrapper[5119]: I0121 10:22:43.020836 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" podStartSLOduration=3.629121842 podStartE2EDuration="12.020807194s" podCreationTimestamp="2026-01-21 10:22:31 +0000 UTC" firstStartedPulling="2026-01-21 10:22:34.112058384 +0000 UTC m=+1669.780150062" lastFinishedPulling="2026-01-21 10:22:42.503743736 +0000 UTC m=+1678.171835414" observedRunningTime="2026-01-21 10:22:43.015167621 +0000 UTC m=+1678.683259299" watchObservedRunningTime="2026-01-21 10:22:43.020807194 +0000 UTC m=+1678.688898872" Jan 21 10:22:43 crc kubenswrapper[5119]: I0121 10:22:43.045482 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=16.315260173 podStartE2EDuration="29.045463883s" podCreationTimestamp="2026-01-21 10:22:14 +0000 UTC" firstStartedPulling="2026-01-21 10:22:29.815468364 +0000 UTC m=+1665.483560042" lastFinishedPulling="2026-01-21 10:22:42.545672074 +0000 UTC m=+1678.213763752" observedRunningTime="2026-01-21 10:22:43.040875818 +0000 UTC m=+1678.708967496" watchObservedRunningTime="2026-01-21 10:22:43.045463883 +0000 UTC m=+1678.713555561" Jan 21 10:22:43 crc kubenswrapper[5119]: I0121 10:22:43.257396 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm"] Jan 21 10:22:43 crc kubenswrapper[5119]: I0121 10:22:43.257619 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm" Jan 21 10:22:43 crc kubenswrapper[5119]: I0121 10:22:43.260583 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-event-sg-core-configmap\"" Jan 21 10:22:43 crc kubenswrapper[5119]: I0121 10:22:43.262043 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-cert\"" Jan 21 10:22:43 crc kubenswrapper[5119]: I0121 10:22:43.419883 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/5ab338f9-d819-4ea4-9298-e9b521d0d494-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm\" (UID: \"5ab338f9-d819-4ea4-9298-e9b521d0d494\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm" Jan 21 10:22:43 crc kubenswrapper[5119]: I0121 10:22:43.420349 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/5ab338f9-d819-4ea4-9298-e9b521d0d494-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm\" (UID: \"5ab338f9-d819-4ea4-9298-e9b521d0d494\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm" Jan 21 10:22:43 crc kubenswrapper[5119]: I0121 10:22:43.420466 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stxbb\" (UniqueName: \"kubernetes.io/projected/5ab338f9-d819-4ea4-9298-e9b521d0d494-kube-api-access-stxbb\") pod \"default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm\" (UID: \"5ab338f9-d819-4ea4-9298-e9b521d0d494\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm" Jan 21 10:22:43 crc kubenswrapper[5119]: I0121 10:22:43.420516 5119 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5ab338f9-d819-4ea4-9298-e9b521d0d494-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm\" (UID: \"5ab338f9-d819-4ea4-9298-e9b521d0d494\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm" Jan 21 10:22:43 crc kubenswrapper[5119]: I0121 10:22:43.522383 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/5ab338f9-d819-4ea4-9298-e9b521d0d494-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm\" (UID: \"5ab338f9-d819-4ea4-9298-e9b521d0d494\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm" Jan 21 10:22:43 crc kubenswrapper[5119]: I0121 10:22:43.523365 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/5ab338f9-d819-4ea4-9298-e9b521d0d494-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm\" (UID: \"5ab338f9-d819-4ea4-9298-e9b521d0d494\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm" Jan 21 10:22:43 crc kubenswrapper[5119]: I0121 10:22:43.523428 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-stxbb\" (UniqueName: \"kubernetes.io/projected/5ab338f9-d819-4ea4-9298-e9b521d0d494-kube-api-access-stxbb\") pod \"default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm\" (UID: \"5ab338f9-d819-4ea4-9298-e9b521d0d494\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm" Jan 21 10:22:43 crc kubenswrapper[5119]: I0121 10:22:43.523460 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/5ab338f9-d819-4ea4-9298-e9b521d0d494-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm\" (UID: \"5ab338f9-d819-4ea4-9298-e9b521d0d494\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm" Jan 21 10:22:43 crc kubenswrapper[5119]: I0121 10:22:43.523757 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5ab338f9-d819-4ea4-9298-e9b521d0d494-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm\" (UID: \"5ab338f9-d819-4ea4-9298-e9b521d0d494\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm" Jan 21 10:22:43 crc kubenswrapper[5119]: I0121 10:22:43.523245 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/5ab338f9-d819-4ea4-9298-e9b521d0d494-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm\" (UID: \"5ab338f9-d819-4ea4-9298-e9b521d0d494\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm" Jan 21 10:22:43 crc kubenswrapper[5119]: I0121 10:22:43.530764 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/5ab338f9-d819-4ea4-9298-e9b521d0d494-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm\" (UID: \"5ab338f9-d819-4ea4-9298-e9b521d0d494\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm" Jan 21 10:22:43 crc kubenswrapper[5119]: I0121 10:22:43.544135 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-stxbb\" (UniqueName: \"kubernetes.io/projected/5ab338f9-d819-4ea4-9298-e9b521d0d494-kube-api-access-stxbb\") pod \"default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm\" (UID: \"5ab338f9-d819-4ea4-9298-e9b521d0d494\") " 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm" Jan 21 10:22:43 crc kubenswrapper[5119]: I0121 10:22:43.571804 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm" Jan 21 10:22:43 crc kubenswrapper[5119]: I0121 10:22:43.982270 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" event={"ID":"2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7","Type":"ContainerStarted","Data":"c1c1a41faee89b7076259292b0acfcb326d67b4842c00e2c78d895d976526b44"} Jan 21 10:22:44 crc kubenswrapper[5119]: I0121 10:22:44.039066 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm"] Jan 21 10:22:44 crc kubenswrapper[5119]: I0121 10:22:44.060522 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z"] Jan 21 10:22:44 crc kubenswrapper[5119]: I0121 10:22:44.072536 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z" Jan 21 10:22:44 crc kubenswrapper[5119]: I0121 10:22:44.074797 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z"] Jan 21 10:22:44 crc kubenswrapper[5119]: I0121 10:22:44.074894 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-event-sg-core-configmap\"" Jan 21 10:22:44 crc kubenswrapper[5119]: I0121 10:22:44.234799 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8472t\" (UniqueName: \"kubernetes.io/projected/89079dd8-483b-44ef-81be-6ab712709669-kube-api-access-8472t\") pod \"default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z\" (UID: \"89079dd8-483b-44ef-81be-6ab712709669\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z" Jan 21 10:22:44 crc kubenswrapper[5119]: I0121 10:22:44.235213 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/89079dd8-483b-44ef-81be-6ab712709669-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z\" (UID: \"89079dd8-483b-44ef-81be-6ab712709669\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z" Jan 21 10:22:44 crc kubenswrapper[5119]: I0121 10:22:44.235268 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/89079dd8-483b-44ef-81be-6ab712709669-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z\" (UID: \"89079dd8-483b-44ef-81be-6ab712709669\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z" Jan 21 10:22:44 crc kubenswrapper[5119]: I0121 10:22:44.235346 5119 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/89079dd8-483b-44ef-81be-6ab712709669-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z\" (UID: \"89079dd8-483b-44ef-81be-6ab712709669\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z" Jan 21 10:22:44 crc kubenswrapper[5119]: I0121 10:22:44.337472 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/89079dd8-483b-44ef-81be-6ab712709669-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z\" (UID: \"89079dd8-483b-44ef-81be-6ab712709669\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z" Jan 21 10:22:44 crc kubenswrapper[5119]: I0121 10:22:44.337528 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/89079dd8-483b-44ef-81be-6ab712709669-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z\" (UID: \"89079dd8-483b-44ef-81be-6ab712709669\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z" Jan 21 10:22:44 crc kubenswrapper[5119]: I0121 10:22:44.337555 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/89079dd8-483b-44ef-81be-6ab712709669-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z\" (UID: \"89079dd8-483b-44ef-81be-6ab712709669\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z" Jan 21 10:22:44 crc kubenswrapper[5119]: I0121 10:22:44.337628 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8472t\" (UniqueName: 
\"kubernetes.io/projected/89079dd8-483b-44ef-81be-6ab712709669-kube-api-access-8472t\") pod \"default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z\" (UID: \"89079dd8-483b-44ef-81be-6ab712709669\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z" Jan 21 10:22:44 crc kubenswrapper[5119]: I0121 10:22:44.338072 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/89079dd8-483b-44ef-81be-6ab712709669-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z\" (UID: \"89079dd8-483b-44ef-81be-6ab712709669\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z" Jan 21 10:22:44 crc kubenswrapper[5119]: I0121 10:22:44.338625 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/89079dd8-483b-44ef-81be-6ab712709669-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z\" (UID: \"89079dd8-483b-44ef-81be-6ab712709669\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z" Jan 21 10:22:44 crc kubenswrapper[5119]: I0121 10:22:44.345428 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/89079dd8-483b-44ef-81be-6ab712709669-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z\" (UID: \"89079dd8-483b-44ef-81be-6ab712709669\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z" Jan 21 10:22:44 crc kubenswrapper[5119]: I0121 10:22:44.355074 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8472t\" (UniqueName: \"kubernetes.io/projected/89079dd8-483b-44ef-81be-6ab712709669-kube-api-access-8472t\") pod \"default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z\" (UID: \"89079dd8-483b-44ef-81be-6ab712709669\") " 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z" Jan 21 10:22:44 crc kubenswrapper[5119]: I0121 10:22:44.389560 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z" Jan 21 10:22:44 crc kubenswrapper[5119]: I0121 10:22:44.664876 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z"] Jan 21 10:22:44 crc kubenswrapper[5119]: I0121 10:22:44.992020 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z" event={"ID":"89079dd8-483b-44ef-81be-6ab712709669","Type":"ContainerStarted","Data":"0f23bc931cfd13d002a35975489be7c0b9383ff58cef17f26c3f6fdf47dc07aa"} Jan 21 10:22:44 crc kubenswrapper[5119]: I0121 10:22:44.992407 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z" event={"ID":"89079dd8-483b-44ef-81be-6ab712709669","Type":"ContainerStarted","Data":"62b258333dd14271545dd415198a793c7a0dd77f894fa4680d7d0bbcc70f6040"} Jan 21 10:22:44 crc kubenswrapper[5119]: I0121 10:22:44.996346 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm" event={"ID":"5ab338f9-d819-4ea4-9298-e9b521d0d494","Type":"ContainerStarted","Data":"8cbc293d27141b8899fe5c5fc238d9f31386a2d98ba9c7c2d2ac48e9a6eca45f"} Jan 21 10:22:44 crc kubenswrapper[5119]: I0121 10:22:44.996393 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm" event={"ID":"5ab338f9-d819-4ea4-9298-e9b521d0d494","Type":"ContainerStarted","Data":"3d9f5240c4015738176e6d8315bfe3be7cdf04f4a0c84a2e0536e45e433f34bf"} Jan 21 10:22:44 crc kubenswrapper[5119]: I0121 10:22:44.996409 5119 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm" event={"ID":"5ab338f9-d819-4ea4-9298-e9b521d0d494","Type":"ContainerStarted","Data":"e3d576af64cc746947bb4af668f9490198b8cfe22b104cda8e2d3a55919c703f"} Jan 21 10:22:45 crc kubenswrapper[5119]: I0121 10:22:45.003186 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" event={"ID":"2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7","Type":"ContainerStarted","Data":"b5f99bf5b4cbdac4cc850dbe002df914a025c4d4bcf4e503c5227a04e407bf26"} Jan 21 10:22:45 crc kubenswrapper[5119]: I0121 10:22:45.022458 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm" podStartSLOduration=2.694650572 podStartE2EDuration="3.022432384s" podCreationTimestamp="2026-01-21 10:22:42 +0000 UTC" firstStartedPulling="2026-01-21 10:22:44.024366449 +0000 UTC m=+1679.692458127" lastFinishedPulling="2026-01-21 10:22:44.352148261 +0000 UTC m=+1680.020239939" observedRunningTime="2026-01-21 10:22:45.016707809 +0000 UTC m=+1680.684799487" watchObservedRunningTime="2026-01-21 10:22:45.022432384 +0000 UTC m=+1680.690524072" Jan 21 10:22:45 crc kubenswrapper[5119]: I0121 10:22:45.035095 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" podStartSLOduration=4.074805876 podStartE2EDuration="10.035076918s" podCreationTimestamp="2026-01-21 10:22:35 +0000 UTC" firstStartedPulling="2026-01-21 10:22:37.964391681 +0000 UTC m=+1673.632483359" lastFinishedPulling="2026-01-21 10:22:43.924662723 +0000 UTC m=+1679.592754401" observedRunningTime="2026-01-21 10:22:45.030299118 +0000 UTC m=+1680.698390796" watchObservedRunningTime="2026-01-21 10:22:45.035076918 +0000 UTC m=+1680.703168606" Jan 21 10:22:46 crc kubenswrapper[5119]: I0121 10:22:46.013404 5119 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z" event={"ID":"89079dd8-483b-44ef-81be-6ab712709669","Type":"ContainerStarted","Data":"a841da0451b8018dd9450a8a688007afc9dffc0da8614fd4c974a86d88f435ee"} Jan 21 10:22:46 crc kubenswrapper[5119]: I0121 10:22:46.030925 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z" podStartSLOduration=1.710148462 podStartE2EDuration="2.030908203s" podCreationTimestamp="2026-01-21 10:22:44 +0000 UTC" firstStartedPulling="2026-01-21 10:22:44.681424344 +0000 UTC m=+1680.349516012" lastFinishedPulling="2026-01-21 10:22:45.002184075 +0000 UTC m=+1680.670275753" observedRunningTime="2026-01-21 10:22:46.029175006 +0000 UTC m=+1681.697266684" watchObservedRunningTime="2026-01-21 10:22:46.030908203 +0000 UTC m=+1681.698999881" Jan 21 10:22:47 crc kubenswrapper[5119]: I0121 10:22:47.995491 5119 scope.go:117] "RemoveContainer" containerID="dd309237ffbf42fe3ea520e4c3e8752c639a229afa65085759f64972a1240e98" Jan 21 10:22:48 crc kubenswrapper[5119]: I0121 10:22:48.020092 5119 scope.go:117] "RemoveContainer" containerID="d2582b5f1d51b696695d96908c4f45bacb172c32addb9a28eecf8fd1638cba16" Jan 21 10:22:48 crc kubenswrapper[5119]: I0121 10:22:48.101083 5119 scope.go:117] "RemoveContainer" containerID="39fedb1d3560a25e86d2ade4af724fcb9c1d1dd443669904555fcc144fceba77" Jan 21 10:22:48 crc kubenswrapper[5119]: I0121 10:22:48.448997 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Jan 21 10:22:48 crc kubenswrapper[5119]: I0121 10:22:48.493180 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Jan 21 10:22:49 crc kubenswrapper[5119]: I0121 10:22:49.065158 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="service-telemetry/prometheus-default-0" Jan 21 10:23:01 crc kubenswrapper[5119]: I0121 10:23:01.668363 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-r8tgq"] Jan 21 10:23:01 crc kubenswrapper[5119]: I0121 10:23:01.669434 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" podUID="532aca70-8c2f-4163-b5a7-781f17183d03" containerName="default-interconnect" containerID="cri-o://0a0106ee20f08e71631044901e2ee587bc8795ebd6f41973b3fca8ca6819b292" gracePeriod=30 Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.106201 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.143884 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-hzlwg"] Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.144691 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="532aca70-8c2f-4163-b5a7-781f17183d03" containerName="default-interconnect" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.144715 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="532aca70-8c2f-4163-b5a7-781f17183d03" containerName="default-interconnect" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.144820 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="532aca70-8c2f-4163-b5a7-781f17183d03" containerName="default-interconnect" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.148155 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-hzlwg" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.162427 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-hzlwg"] Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.163568 5119 generic.go:358] "Generic (PLEG): container finished" podID="532aca70-8c2f-4163-b5a7-781f17183d03" containerID="0a0106ee20f08e71631044901e2ee587bc8795ebd6f41973b3fca8ca6819b292" exitCode=0 Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.163855 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" event={"ID":"532aca70-8c2f-4163-b5a7-781f17183d03","Type":"ContainerDied","Data":"0a0106ee20f08e71631044901e2ee587bc8795ebd6f41973b3fca8ca6819b292"} Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.163887 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" event={"ID":"532aca70-8c2f-4163-b5a7-781f17183d03","Type":"ContainerDied","Data":"cbc1f25f7f6664595f6d9bc4a55801d311cf93cb59c7c17522ff1ea32eb50c1b"} Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.163907 5119 scope.go:117] "RemoveContainer" containerID="0a0106ee20f08e71631044901e2ee587bc8795ebd6f41973b3fca8ca6819b292" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.164199 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-r8tgq" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.171632 5119 generic.go:358] "Generic (PLEG): container finished" podID="89079dd8-483b-44ef-81be-6ab712709669" containerID="0f23bc931cfd13d002a35975489be7c0b9383ff58cef17f26c3f6fdf47dc07aa" exitCode=0 Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.171959 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z" event={"ID":"89079dd8-483b-44ef-81be-6ab712709669","Type":"ContainerDied","Data":"0f23bc931cfd13d002a35975489be7c0b9383ff58cef17f26c3f6fdf47dc07aa"} Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.172387 5119 scope.go:117] "RemoveContainer" containerID="0f23bc931cfd13d002a35975489be7c0b9383ff58cef17f26c3f6fdf47dc07aa" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.201427 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/532aca70-8c2f-4163-b5a7-781f17183d03-default-interconnect-inter-router-credentials\") pod \"532aca70-8c2f-4163-b5a7-781f17183d03\" (UID: \"532aca70-8c2f-4163-b5a7-781f17183d03\") " Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.201558 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/532aca70-8c2f-4163-b5a7-781f17183d03-default-interconnect-openstack-credentials\") pod \"532aca70-8c2f-4163-b5a7-781f17183d03\" (UID: \"532aca70-8c2f-4163-b5a7-781f17183d03\") " Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.201586 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/532aca70-8c2f-4163-b5a7-781f17183d03-default-interconnect-openstack-ca\") pod 
\"532aca70-8c2f-4163-b5a7-781f17183d03\" (UID: \"532aca70-8c2f-4163-b5a7-781f17183d03\") " Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.201732 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22d52\" (UniqueName: \"kubernetes.io/projected/532aca70-8c2f-4163-b5a7-781f17183d03-kube-api-access-22d52\") pod \"532aca70-8c2f-4163-b5a7-781f17183d03\" (UID: \"532aca70-8c2f-4163-b5a7-781f17183d03\") " Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.201758 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/532aca70-8c2f-4163-b5a7-781f17183d03-sasl-users\") pod \"532aca70-8c2f-4163-b5a7-781f17183d03\" (UID: \"532aca70-8c2f-4163-b5a7-781f17183d03\") " Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.201806 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/532aca70-8c2f-4163-b5a7-781f17183d03-sasl-config\") pod \"532aca70-8c2f-4163-b5a7-781f17183d03\" (UID: \"532aca70-8c2f-4163-b5a7-781f17183d03\") " Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.201820 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/532aca70-8c2f-4163-b5a7-781f17183d03-default-interconnect-inter-router-ca\") pod \"532aca70-8c2f-4163-b5a7-781f17183d03\" (UID: \"532aca70-8c2f-4163-b5a7-781f17183d03\") " Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.201947 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/0661936f-76da-4b08-818a-352bba8bad5c-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-hzlwg\" (UID: \"0661936f-76da-4b08-818a-352bba8bad5c\") " 
pod="service-telemetry/default-interconnect-55bf8d5cb-hzlwg" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.201983 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qn4z\" (UniqueName: \"kubernetes.io/projected/0661936f-76da-4b08-818a-352bba8bad5c-kube-api-access-9qn4z\") pod \"default-interconnect-55bf8d5cb-hzlwg\" (UID: \"0661936f-76da-4b08-818a-352bba8bad5c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hzlwg" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.202033 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/0661936f-76da-4b08-818a-352bba8bad5c-sasl-config\") pod \"default-interconnect-55bf8d5cb-hzlwg\" (UID: \"0661936f-76da-4b08-818a-352bba8bad5c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hzlwg" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.202071 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/0661936f-76da-4b08-818a-352bba8bad5c-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-hzlwg\" (UID: \"0661936f-76da-4b08-818a-352bba8bad5c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hzlwg" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.202097 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/0661936f-76da-4b08-818a-352bba8bad5c-sasl-users\") pod \"default-interconnect-55bf8d5cb-hzlwg\" (UID: \"0661936f-76da-4b08-818a-352bba8bad5c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hzlwg" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.202148 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/0661936f-76da-4b08-818a-352bba8bad5c-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-hzlwg\" (UID: \"0661936f-76da-4b08-818a-352bba8bad5c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hzlwg" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.202166 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/0661936f-76da-4b08-818a-352bba8bad5c-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-hzlwg\" (UID: \"0661936f-76da-4b08-818a-352bba8bad5c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hzlwg" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.205357 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/532aca70-8c2f-4163-b5a7-781f17183d03-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "532aca70-8c2f-4163-b5a7-781f17183d03" (UID: "532aca70-8c2f-4163-b5a7-781f17183d03"). InnerVolumeSpecName "sasl-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.206038 5119 scope.go:117] "RemoveContainer" containerID="0a0106ee20f08e71631044901e2ee587bc8795ebd6f41973b3fca8ca6819b292" Jan 21 10:23:02 crc kubenswrapper[5119]: E0121 10:23:02.209278 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a0106ee20f08e71631044901e2ee587bc8795ebd6f41973b3fca8ca6819b292\": container with ID starting with 0a0106ee20f08e71631044901e2ee587bc8795ebd6f41973b3fca8ca6819b292 not found: ID does not exist" containerID="0a0106ee20f08e71631044901e2ee587bc8795ebd6f41973b3fca8ca6819b292" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.209342 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a0106ee20f08e71631044901e2ee587bc8795ebd6f41973b3fca8ca6819b292"} err="failed to get container status \"0a0106ee20f08e71631044901e2ee587bc8795ebd6f41973b3fca8ca6819b292\": rpc error: code = NotFound desc = could not find container \"0a0106ee20f08e71631044901e2ee587bc8795ebd6f41973b3fca8ca6819b292\": container with ID starting with 0a0106ee20f08e71631044901e2ee587bc8795ebd6f41973b3fca8ca6819b292 not found: ID does not exist" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.230416 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/532aca70-8c2f-4163-b5a7-781f17183d03-kube-api-access-22d52" (OuterVolumeSpecName: "kube-api-access-22d52") pod "532aca70-8c2f-4163-b5a7-781f17183d03" (UID: "532aca70-8c2f-4163-b5a7-781f17183d03"). InnerVolumeSpecName "kube-api-access-22d52". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.230483 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/532aca70-8c2f-4163-b5a7-781f17183d03-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "532aca70-8c2f-4163-b5a7-781f17183d03" (UID: "532aca70-8c2f-4163-b5a7-781f17183d03"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.231718 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/532aca70-8c2f-4163-b5a7-781f17183d03-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "532aca70-8c2f-4163-b5a7-781f17183d03" (UID: "532aca70-8c2f-4163-b5a7-781f17183d03"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.235791 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/532aca70-8c2f-4163-b5a7-781f17183d03-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "532aca70-8c2f-4163-b5a7-781f17183d03" (UID: "532aca70-8c2f-4163-b5a7-781f17183d03"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.235939 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/532aca70-8c2f-4163-b5a7-781f17183d03-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "532aca70-8c2f-4163-b5a7-781f17183d03" (UID: "532aca70-8c2f-4163-b5a7-781f17183d03"). 
InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.236004 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/532aca70-8c2f-4163-b5a7-781f17183d03-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "532aca70-8c2f-4163-b5a7-781f17183d03" (UID: "532aca70-8c2f-4163-b5a7-781f17183d03"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.302865 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/0661936f-76da-4b08-818a-352bba8bad5c-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-hzlwg\" (UID: \"0661936f-76da-4b08-818a-352bba8bad5c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hzlwg" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.303250 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/0661936f-76da-4b08-818a-352bba8bad5c-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-hzlwg\" (UID: \"0661936f-76da-4b08-818a-352bba8bad5c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hzlwg" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.303313 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/0661936f-76da-4b08-818a-352bba8bad5c-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-hzlwg\" (UID: \"0661936f-76da-4b08-818a-352bba8bad5c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hzlwg" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.303341 5119 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9qn4z\" (UniqueName: \"kubernetes.io/projected/0661936f-76da-4b08-818a-352bba8bad5c-kube-api-access-9qn4z\") pod \"default-interconnect-55bf8d5cb-hzlwg\" (UID: \"0661936f-76da-4b08-818a-352bba8bad5c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hzlwg" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.303397 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/0661936f-76da-4b08-818a-352bba8bad5c-sasl-config\") pod \"default-interconnect-55bf8d5cb-hzlwg\" (UID: \"0661936f-76da-4b08-818a-352bba8bad5c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hzlwg" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.303441 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/0661936f-76da-4b08-818a-352bba8bad5c-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-hzlwg\" (UID: \"0661936f-76da-4b08-818a-352bba8bad5c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hzlwg" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.303473 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/0661936f-76da-4b08-818a-352bba8bad5c-sasl-users\") pod \"default-interconnect-55bf8d5cb-hzlwg\" (UID: \"0661936f-76da-4b08-818a-352bba8bad5c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hzlwg" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.303535 5119 reconciler_common.go:299] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/532aca70-8c2f-4163-b5a7-781f17183d03-sasl-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.303551 5119 reconciler_common.go:299] "Volume 
detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/532aca70-8c2f-4163-b5a7-781f17183d03-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.303565 5119 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/532aca70-8c2f-4163-b5a7-781f17183d03-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.303579 5119 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/532aca70-8c2f-4163-b5a7-781f17183d03-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.303592 5119 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/532aca70-8c2f-4163-b5a7-781f17183d03-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.303624 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-22d52\" (UniqueName: \"kubernetes.io/projected/532aca70-8c2f-4163-b5a7-781f17183d03-kube-api-access-22d52\") on node \"crc\" DevicePath \"\"" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.303637 5119 reconciler_common.go:299] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/532aca70-8c2f-4163-b5a7-781f17183d03-sasl-users\") on node \"crc\" DevicePath \"\"" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.304798 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/0661936f-76da-4b08-818a-352bba8bad5c-sasl-config\") pod \"default-interconnect-55bf8d5cb-hzlwg\" (UID: 
\"0661936f-76da-4b08-818a-352bba8bad5c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hzlwg" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.308821 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/0661936f-76da-4b08-818a-352bba8bad5c-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-hzlwg\" (UID: \"0661936f-76da-4b08-818a-352bba8bad5c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hzlwg" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.308834 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/0661936f-76da-4b08-818a-352bba8bad5c-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-hzlwg\" (UID: \"0661936f-76da-4b08-818a-352bba8bad5c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hzlwg" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.309966 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/0661936f-76da-4b08-818a-352bba8bad5c-sasl-users\") pod \"default-interconnect-55bf8d5cb-hzlwg\" (UID: \"0661936f-76da-4b08-818a-352bba8bad5c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hzlwg" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.311210 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/0661936f-76da-4b08-818a-352bba8bad5c-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-hzlwg\" (UID: \"0661936f-76da-4b08-818a-352bba8bad5c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hzlwg" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.311743 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/0661936f-76da-4b08-818a-352bba8bad5c-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-hzlwg\" (UID: \"0661936f-76da-4b08-818a-352bba8bad5c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hzlwg" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.323098 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qn4z\" (UniqueName: \"kubernetes.io/projected/0661936f-76da-4b08-818a-352bba8bad5c-kube-api-access-9qn4z\") pod \"default-interconnect-55bf8d5cb-hzlwg\" (UID: \"0661936f-76da-4b08-818a-352bba8bad5c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hzlwg" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.470944 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-hzlwg" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.492280 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-r8tgq"] Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.496895 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-r8tgq"] Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.603587 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="532aca70-8c2f-4163-b5a7-781f17183d03" path="/var/lib/kubelet/pods/532aca70-8c2f-4163-b5a7-781f17183d03/volumes" Jan 21 10:23:02 crc kubenswrapper[5119]: I0121 10:23:02.653960 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-hzlwg"] Jan 21 10:23:02 crc kubenswrapper[5119]: W0121 10:23:02.660963 5119 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0661936f_76da_4b08_818a_352bba8bad5c.slice/crio-cf8a206b318cfe3145191ab2e960ea03c0148c36888af6097fd4f732f21581a5 WatchSource:0}: Error finding container cf8a206b318cfe3145191ab2e960ea03c0148c36888af6097fd4f732f21581a5: Status 404 returned error can't find the container with id cf8a206b318cfe3145191ab2e960ea03c0148c36888af6097fd4f732f21581a5 Jan 21 10:23:03 crc kubenswrapper[5119]: I0121 10:23:03.180360 5119 generic.go:358] "Generic (PLEG): container finished" podID="f2986212-b53b-4df9-9cd5-884f35c89cba" containerID="87f8ff1ffa8590bccca6a619d1cecc88c173dfdeb2c92db4f78626f5bd473a38" exitCode=0 Jan 21 10:23:03 crc kubenswrapper[5119]: I0121 10:23:03.180431 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" event={"ID":"f2986212-b53b-4df9-9cd5-884f35c89cba","Type":"ContainerDied","Data":"87f8ff1ffa8590bccca6a619d1cecc88c173dfdeb2c92db4f78626f5bd473a38"} Jan 21 10:23:03 crc kubenswrapper[5119]: I0121 10:23:03.181030 5119 scope.go:117] "RemoveContainer" containerID="87f8ff1ffa8590bccca6a619d1cecc88c173dfdeb2c92db4f78626f5bd473a38" Jan 21 10:23:03 crc kubenswrapper[5119]: I0121 10:23:03.184941 5119 generic.go:358] "Generic (PLEG): container finished" podID="5ab338f9-d819-4ea4-9298-e9b521d0d494" containerID="3d9f5240c4015738176e6d8315bfe3be7cdf04f4a0c84a2e0536e45e433f34bf" exitCode=0 Jan 21 10:23:03 crc kubenswrapper[5119]: I0121 10:23:03.185026 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm" event={"ID":"5ab338f9-d819-4ea4-9298-e9b521d0d494","Type":"ContainerDied","Data":"3d9f5240c4015738176e6d8315bfe3be7cdf04f4a0c84a2e0536e45e433f34bf"} Jan 21 10:23:03 crc kubenswrapper[5119]: I0121 10:23:03.185739 5119 scope.go:117] "RemoveContainer" containerID="3d9f5240c4015738176e6d8315bfe3be7cdf04f4a0c84a2e0536e45e433f34bf" Jan 21 10:23:03 crc 
kubenswrapper[5119]: I0121 10:23:03.193563 5119 generic.go:358] "Generic (PLEG): container finished" podID="2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7" containerID="c1c1a41faee89b7076259292b0acfcb326d67b4842c00e2c78d895d976526b44" exitCode=0 Jan 21 10:23:03 crc kubenswrapper[5119]: I0121 10:23:03.193636 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" event={"ID":"2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7","Type":"ContainerDied","Data":"c1c1a41faee89b7076259292b0acfcb326d67b4842c00e2c78d895d976526b44"} Jan 21 10:23:03 crc kubenswrapper[5119]: I0121 10:23:03.195472 5119 scope.go:117] "RemoveContainer" containerID="c1c1a41faee89b7076259292b0acfcb326d67b4842c00e2c78d895d976526b44" Jan 21 10:23:03 crc kubenswrapper[5119]: I0121 10:23:03.212723 5119 generic.go:358] "Generic (PLEG): container finished" podID="e23122ba-6ad2-407e-aaeb-7c8f6e27ab54" containerID="35c35c470220aef170db93b9c75a68b8290ec835024a659fd8ed30d5e657fd6c" exitCode=0 Jan 21 10:23:03 crc kubenswrapper[5119]: I0121 10:23:03.212816 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" event={"ID":"e23122ba-6ad2-407e-aaeb-7c8f6e27ab54","Type":"ContainerDied","Data":"35c35c470220aef170db93b9c75a68b8290ec835024a659fd8ed30d5e657fd6c"} Jan 21 10:23:03 crc kubenswrapper[5119]: I0121 10:23:03.213277 5119 scope.go:117] "RemoveContainer" containerID="35c35c470220aef170db93b9c75a68b8290ec835024a659fd8ed30d5e657fd6c" Jan 21 10:23:03 crc kubenswrapper[5119]: I0121 10:23:03.227340 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-hzlwg" event={"ID":"0661936f-76da-4b08-818a-352bba8bad5c","Type":"ContainerStarted","Data":"8c5301242aff9d9ac79712bf706a976205f9b95e2ecf97150bc2ce72d36e044d"} Jan 21 10:23:03 crc kubenswrapper[5119]: I0121 10:23:03.227394 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-interconnect-55bf8d5cb-hzlwg" event={"ID":"0661936f-76da-4b08-818a-352bba8bad5c","Type":"ContainerStarted","Data":"cf8a206b318cfe3145191ab2e960ea03c0148c36888af6097fd4f732f21581a5"} Jan 21 10:23:03 crc kubenswrapper[5119]: I0121 10:23:03.235171 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z" event={"ID":"89079dd8-483b-44ef-81be-6ab712709669","Type":"ContainerStarted","Data":"f4085cfe15a01080c9832dfc848e1605b258f1824808bd4f07e2feaf578ac463"} Jan 21 10:23:03 crc kubenswrapper[5119]: I0121 10:23:03.292576 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-hzlwg" podStartSLOduration=2.29255443 podStartE2EDuration="2.29255443s" podCreationTimestamp="2026-01-21 10:23:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:23:03.291951844 +0000 UTC m=+1698.960043532" watchObservedRunningTime="2026-01-21 10:23:03.29255443 +0000 UTC m=+1698.960646108" Jan 21 10:23:04 crc kubenswrapper[5119]: I0121 10:23:04.244389 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" event={"ID":"e23122ba-6ad2-407e-aaeb-7c8f6e27ab54","Type":"ContainerStarted","Data":"d6be1d6d743895dfc4cf84d4adc3beb93423fc5799d0b4bffea18f9661e4c6f6"} Jan 21 10:23:04 crc kubenswrapper[5119]: I0121 10:23:04.246638 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z" event={"ID":"89079dd8-483b-44ef-81be-6ab712709669","Type":"ContainerDied","Data":"f4085cfe15a01080c9832dfc848e1605b258f1824808bd4f07e2feaf578ac463"} Jan 21 10:23:04 crc kubenswrapper[5119]: I0121 10:23:04.246591 5119 generic.go:358] "Generic (PLEG): container finished" 
podID="89079dd8-483b-44ef-81be-6ab712709669" containerID="f4085cfe15a01080c9832dfc848e1605b258f1824808bd4f07e2feaf578ac463" exitCode=0 Jan 21 10:23:04 crc kubenswrapper[5119]: I0121 10:23:04.246694 5119 scope.go:117] "RemoveContainer" containerID="0f23bc931cfd13d002a35975489be7c0b9383ff58cef17f26c3f6fdf47dc07aa" Jan 21 10:23:04 crc kubenswrapper[5119]: I0121 10:23:04.246996 5119 scope.go:117] "RemoveContainer" containerID="f4085cfe15a01080c9832dfc848e1605b258f1824808bd4f07e2feaf578ac463" Jan 21 10:23:04 crc kubenswrapper[5119]: E0121 10:23:04.247235 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z_service-telemetry(89079dd8-483b-44ef-81be-6ab712709669)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z" podUID="89079dd8-483b-44ef-81be-6ab712709669" Jan 21 10:23:04 crc kubenswrapper[5119]: I0121 10:23:04.256072 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" event={"ID":"f2986212-b53b-4df9-9cd5-884f35c89cba","Type":"ContainerStarted","Data":"4205ebc442180a2f503135d89aae8045b919675dc55ab4cfb3ecf2e3bbc97577"} Jan 21 10:23:04 crc kubenswrapper[5119]: I0121 10:23:04.262689 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm" event={"ID":"5ab338f9-d819-4ea4-9298-e9b521d0d494","Type":"ContainerStarted","Data":"b3efb26845161d27399ea77845cad2c1524875e9960a6c344d0d602df7e5951e"} Jan 21 10:23:04 crc kubenswrapper[5119]: I0121 10:23:04.267230 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" 
event={"ID":"2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7","Type":"ContainerStarted","Data":"3abe897a5b1044b6c43f53de69a8a995c3b3071d86c20c3925f4fe049a303695"} Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.292384 5119 generic.go:358] "Generic (PLEG): container finished" podID="f2986212-b53b-4df9-9cd5-884f35c89cba" containerID="4205ebc442180a2f503135d89aae8045b919675dc55ab4cfb3ecf2e3bbc97577" exitCode=0 Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.292457 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" event={"ID":"f2986212-b53b-4df9-9cd5-884f35c89cba","Type":"ContainerDied","Data":"4205ebc442180a2f503135d89aae8045b919675dc55ab4cfb3ecf2e3bbc97577"} Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.292528 5119 scope.go:117] "RemoveContainer" containerID="87f8ff1ffa8590bccca6a619d1cecc88c173dfdeb2c92db4f78626f5bd473a38" Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.293135 5119 scope.go:117] "RemoveContainer" containerID="4205ebc442180a2f503135d89aae8045b919675dc55ab4cfb3ecf2e3bbc97577" Jan 21 10:23:05 crc kubenswrapper[5119]: E0121 10:23:05.293420 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k_service-telemetry(f2986212-b53b-4df9-9cd5-884f35c89cba)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" podUID="f2986212-b53b-4df9-9cd5-884f35c89cba" Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.308969 5119 generic.go:358] "Generic (PLEG): container finished" podID="5ab338f9-d819-4ea4-9298-e9b521d0d494" containerID="b3efb26845161d27399ea77845cad2c1524875e9960a6c344d0d602df7e5951e" exitCode=0 Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.309123 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm" event={"ID":"5ab338f9-d819-4ea4-9298-e9b521d0d494","Type":"ContainerDied","Data":"b3efb26845161d27399ea77845cad2c1524875e9960a6c344d0d602df7e5951e"} Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.309562 5119 scope.go:117] "RemoveContainer" containerID="b3efb26845161d27399ea77845cad2c1524875e9960a6c344d0d602df7e5951e" Jan 21 10:23:05 crc kubenswrapper[5119]: E0121 10:23:05.309868 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm_service-telemetry(5ab338f9-d819-4ea4-9298-e9b521d0d494)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm" podUID="5ab338f9-d819-4ea4-9298-e9b521d0d494" Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.330270 5119 generic.go:358] "Generic (PLEG): container finished" podID="2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7" containerID="3abe897a5b1044b6c43f53de69a8a995c3b3071d86c20c3925f4fe049a303695" exitCode=0 Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.330460 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" event={"ID":"2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7","Type":"ContainerDied","Data":"3abe897a5b1044b6c43f53de69a8a995c3b3071d86c20c3925f4fe049a303695"} Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.343542 5119 scope.go:117] "RemoveContainer" containerID="3abe897a5b1044b6c43f53de69a8a995c3b3071d86c20c3925f4fe049a303695" Jan 21 10:23:05 crc kubenswrapper[5119]: E0121 10:23:05.350157 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge 
pod=default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv_service-telemetry(2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" podUID="2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7" Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.357642 5119 scope.go:117] "RemoveContainer" containerID="3d9f5240c4015738176e6d8315bfe3be7cdf04f4a0c84a2e0536e45e433f34bf" Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.358072 5119 generic.go:358] "Generic (PLEG): container finished" podID="e23122ba-6ad2-407e-aaeb-7c8f6e27ab54" containerID="d6be1d6d743895dfc4cf84d4adc3beb93423fc5799d0b4bffea18f9661e4c6f6" exitCode=0 Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.358266 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" event={"ID":"e23122ba-6ad2-407e-aaeb-7c8f6e27ab54","Type":"ContainerDied","Data":"d6be1d6d743895dfc4cf84d4adc3beb93423fc5799d0b4bffea18f9661e4c6f6"} Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.358849 5119 scope.go:117] "RemoveContainer" containerID="d6be1d6d743895dfc4cf84d4adc3beb93423fc5799d0b4bffea18f9661e4c6f6" Jan 21 10:23:05 crc kubenswrapper[5119]: E0121 10:23:05.359158 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t_service-telemetry(e23122ba-6ad2-407e-aaeb-7c8f6e27ab54)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" podUID="e23122ba-6ad2-407e-aaeb-7c8f6e27ab54" Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.424542 5119 scope.go:117] "RemoveContainer" containerID="c1c1a41faee89b7076259292b0acfcb326d67b4842c00e2c78d895d976526b44" Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.482767 5119 scope.go:117] "RemoveContainer" 
containerID="35c35c470220aef170db93b9c75a68b8290ec835024a659fd8ed30d5e657fd6c" Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.534982 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.552669 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.556740 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.558942 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-selfsigned\"" Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.559185 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"qdr-test-config\"" Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.656541 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/37e04609-ba47-4c81-bd8f-40f2342d42d5-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"37e04609-ba47-4c81-bd8f-40f2342d42d5\") " pod="service-telemetry/qdr-test" Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.656636 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/37e04609-ba47-4c81-bd8f-40f2342d42d5-qdr-test-config\") pod \"qdr-test\" (UID: \"37e04609-ba47-4c81-bd8f-40f2342d42d5\") " pod="service-telemetry/qdr-test" Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.656668 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfv4t\" (UniqueName: 
\"kubernetes.io/projected/37e04609-ba47-4c81-bd8f-40f2342d42d5-kube-api-access-lfv4t\") pod \"qdr-test\" (UID: \"37e04609-ba47-4c81-bd8f-40f2342d42d5\") " pod="service-telemetry/qdr-test" Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.758745 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/37e04609-ba47-4c81-bd8f-40f2342d42d5-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"37e04609-ba47-4c81-bd8f-40f2342d42d5\") " pod="service-telemetry/qdr-test" Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.758827 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/37e04609-ba47-4c81-bd8f-40f2342d42d5-qdr-test-config\") pod \"qdr-test\" (UID: \"37e04609-ba47-4c81-bd8f-40f2342d42d5\") " pod="service-telemetry/qdr-test" Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.758858 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lfv4t\" (UniqueName: \"kubernetes.io/projected/37e04609-ba47-4c81-bd8f-40f2342d42d5-kube-api-access-lfv4t\") pod \"qdr-test\" (UID: \"37e04609-ba47-4c81-bd8f-40f2342d42d5\") " pod="service-telemetry/qdr-test" Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.761188 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/37e04609-ba47-4c81-bd8f-40f2342d42d5-qdr-test-config\") pod \"qdr-test\" (UID: \"37e04609-ba47-4c81-bd8f-40f2342d42d5\") " pod="service-telemetry/qdr-test" Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.776411 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/37e04609-ba47-4c81-bd8f-40f2342d42d5-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: 
\"37e04609-ba47-4c81-bd8f-40f2342d42d5\") " pod="service-telemetry/qdr-test" Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.782307 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfv4t\" (UniqueName: \"kubernetes.io/projected/37e04609-ba47-4c81-bd8f-40f2342d42d5-kube-api-access-lfv4t\") pod \"qdr-test\" (UID: \"37e04609-ba47-4c81-bd8f-40f2342d42d5\") " pod="service-telemetry/qdr-test" Jan 21 10:23:05 crc kubenswrapper[5119]: I0121 10:23:05.895397 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Jan 21 10:23:06 crc kubenswrapper[5119]: I0121 10:23:06.359540 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 21 10:23:06 crc kubenswrapper[5119]: I0121 10:23:06.379577 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"37e04609-ba47-4c81-bd8f-40f2342d42d5","Type":"ContainerStarted","Data":"f12fc471ff7a24a6f15c526c3cd5588e357fd773c5b121ebf890c0f7db99399f"} Jan 21 10:23:15 crc kubenswrapper[5119]: I0121 10:23:15.590908 5119 scope.go:117] "RemoveContainer" containerID="f4085cfe15a01080c9832dfc848e1605b258f1824808bd4f07e2feaf578ac463" Jan 21 10:23:16 crc kubenswrapper[5119]: I0121 10:23:16.490991 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z" event={"ID":"89079dd8-483b-44ef-81be-6ab712709669","Type":"ContainerStarted","Data":"09efea1b2964b6ced1d2471bc009d0f8308d53dd3daf1e552218935686bc5c6f"} Jan 21 10:23:16 crc kubenswrapper[5119]: I0121 10:23:16.494154 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"37e04609-ba47-4c81-bd8f-40f2342d42d5","Type":"ContainerStarted","Data":"864fb3962eeea1154405e8ba70f35bc979d7b17ff99f4a205000dbb9de9bae7a"} Jan 21 10:23:16 crc kubenswrapper[5119]: I0121 10:23:16.526209 5119 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=2.414927263 podStartE2EDuration="11.526184248s" podCreationTimestamp="2026-01-21 10:23:05 +0000 UTC" firstStartedPulling="2026-01-21 10:23:06.362987096 +0000 UTC m=+1702.031078774" lastFinishedPulling="2026-01-21 10:23:15.474244081 +0000 UTC m=+1711.142335759" observedRunningTime="2026-01-21 10:23:16.520110843 +0000 UTC m=+1712.188202531" watchObservedRunningTime="2026-01-21 10:23:16.526184248 +0000 UTC m=+1712.194275936" Jan 21 10:23:16 crc kubenswrapper[5119]: I0121 10:23:16.898245 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-srjmg"] Jan 21 10:23:16 crc kubenswrapper[5119]: I0121 10:23:16.911418 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-srjmg" Jan 21 10:23:16 crc kubenswrapper[5119]: I0121 10:23:16.914631 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-config\"" Jan 21 10:23:16 crc kubenswrapper[5119]: I0121 10:23:16.915105 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-sensubility-config\"" Jan 21 10:23:16 crc kubenswrapper[5119]: I0121 10:23:16.915228 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-healthcheck-log\"" Jan 21 10:23:16 crc kubenswrapper[5119]: I0121 10:23:16.915533 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-entrypoint-script\"" Jan 21 10:23:16 crc kubenswrapper[5119]: I0121 10:23:16.916252 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-entrypoint-script\"" Jan 21 10:23:16 crc kubenswrapper[5119]: I0121 10:23:16.918619 
5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-publisher\"" Jan 21 10:23:16 crc kubenswrapper[5119]: I0121 10:23:16.922698 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-srjmg"] Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.018097 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-healthcheck-log\") pod \"stf-smoketest-smoke1-srjmg\" (UID: \"29b45a36-1894-437c-aa94-b91f1008f40f\") " pod="service-telemetry/stf-smoketest-smoke1-srjmg" Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.018166 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-srjmg\" (UID: \"29b45a36-1894-437c-aa94-b91f1008f40f\") " pod="service-telemetry/stf-smoketest-smoke1-srjmg" Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.018191 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqqtt\" (UniqueName: \"kubernetes.io/projected/29b45a36-1894-437c-aa94-b91f1008f40f-kube-api-access-dqqtt\") pod \"stf-smoketest-smoke1-srjmg\" (UID: \"29b45a36-1894-437c-aa94-b91f1008f40f\") " pod="service-telemetry/stf-smoketest-smoke1-srjmg" Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.018250 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-sensubility-config\") pod \"stf-smoketest-smoke1-srjmg\" (UID: \"29b45a36-1894-437c-aa94-b91f1008f40f\") " 
pod="service-telemetry/stf-smoketest-smoke1-srjmg" Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.018268 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-ceilometer-publisher\") pod \"stf-smoketest-smoke1-srjmg\" (UID: \"29b45a36-1894-437c-aa94-b91f1008f40f\") " pod="service-telemetry/stf-smoketest-smoke1-srjmg" Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.018404 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-collectd-config\") pod \"stf-smoketest-smoke1-srjmg\" (UID: \"29b45a36-1894-437c-aa94-b91f1008f40f\") " pod="service-telemetry/stf-smoketest-smoke1-srjmg" Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.018471 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-srjmg\" (UID: \"29b45a36-1894-437c-aa94-b91f1008f40f\") " pod="service-telemetry/stf-smoketest-smoke1-srjmg" Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.119590 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-healthcheck-log\") pod \"stf-smoketest-smoke1-srjmg\" (UID: \"29b45a36-1894-437c-aa94-b91f1008f40f\") " pod="service-telemetry/stf-smoketest-smoke1-srjmg" Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.119657 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: 
\"kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-srjmg\" (UID: \"29b45a36-1894-437c-aa94-b91f1008f40f\") " pod="service-telemetry/stf-smoketest-smoke1-srjmg" Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.119677 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dqqtt\" (UniqueName: \"kubernetes.io/projected/29b45a36-1894-437c-aa94-b91f1008f40f-kube-api-access-dqqtt\") pod \"stf-smoketest-smoke1-srjmg\" (UID: \"29b45a36-1894-437c-aa94-b91f1008f40f\") " pod="service-telemetry/stf-smoketest-smoke1-srjmg" Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.119702 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-sensubility-config\") pod \"stf-smoketest-smoke1-srjmg\" (UID: \"29b45a36-1894-437c-aa94-b91f1008f40f\") " pod="service-telemetry/stf-smoketest-smoke1-srjmg" Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.119716 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-ceilometer-publisher\") pod \"stf-smoketest-smoke1-srjmg\" (UID: \"29b45a36-1894-437c-aa94-b91f1008f40f\") " pod="service-telemetry/stf-smoketest-smoke1-srjmg" Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.119759 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-collectd-config\") pod \"stf-smoketest-smoke1-srjmg\" (UID: \"29b45a36-1894-437c-aa94-b91f1008f40f\") " pod="service-telemetry/stf-smoketest-smoke1-srjmg" Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.119783 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-srjmg\" (UID: \"29b45a36-1894-437c-aa94-b91f1008f40f\") " pod="service-telemetry/stf-smoketest-smoke1-srjmg" Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.120716 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-srjmg\" (UID: \"29b45a36-1894-437c-aa94-b91f1008f40f\") " pod="service-telemetry/stf-smoketest-smoke1-srjmg" Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.121266 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-sensubility-config\") pod \"stf-smoketest-smoke1-srjmg\" (UID: \"29b45a36-1894-437c-aa94-b91f1008f40f\") " pod="service-telemetry/stf-smoketest-smoke1-srjmg" Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.122502 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-ceilometer-publisher\") pod \"stf-smoketest-smoke1-srjmg\" (UID: \"29b45a36-1894-437c-aa94-b91f1008f40f\") " pod="service-telemetry/stf-smoketest-smoke1-srjmg" Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.122737 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-srjmg\" (UID: \"29b45a36-1894-437c-aa94-b91f1008f40f\") " pod="service-telemetry/stf-smoketest-smoke1-srjmg" Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.122939 5119 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-collectd-config\") pod \"stf-smoketest-smoke1-srjmg\" (UID: \"29b45a36-1894-437c-aa94-b91f1008f40f\") " pod="service-telemetry/stf-smoketest-smoke1-srjmg" Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.123561 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-healthcheck-log\") pod \"stf-smoketest-smoke1-srjmg\" (UID: \"29b45a36-1894-437c-aa94-b91f1008f40f\") " pod="service-telemetry/stf-smoketest-smoke1-srjmg" Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.149777 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqqtt\" (UniqueName: \"kubernetes.io/projected/29b45a36-1894-437c-aa94-b91f1008f40f-kube-api-access-dqqtt\") pod \"stf-smoketest-smoke1-srjmg\" (UID: \"29b45a36-1894-437c-aa94-b91f1008f40f\") " pod="service-telemetry/stf-smoketest-smoke1-srjmg" Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.280099 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-srjmg" Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.346824 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.360971 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.361136 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.423436 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krh6d\" (UniqueName: \"kubernetes.io/projected/9656acf6-a085-424e-9bed-bfc45b74afc5-kube-api-access-krh6d\") pod \"curl\" (UID: \"9656acf6-a085-424e-9bed-bfc45b74afc5\") " pod="service-telemetry/curl" Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.525253 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-krh6d\" (UniqueName: \"kubernetes.io/projected/9656acf6-a085-424e-9bed-bfc45b74afc5-kube-api-access-krh6d\") pod \"curl\" (UID: \"9656acf6-a085-424e-9bed-bfc45b74afc5\") " pod="service-telemetry/curl" Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.546930 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-krh6d\" (UniqueName: \"kubernetes.io/projected/9656acf6-a085-424e-9bed-bfc45b74afc5-kube-api-access-krh6d\") pod \"curl\" (UID: \"9656acf6-a085-424e-9bed-bfc45b74afc5\") " pod="service-telemetry/curl" Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.706664 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.766970 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-srjmg"] Jan 21 10:23:17 crc kubenswrapper[5119]: I0121 10:23:17.912349 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Jan 21 10:23:17 crc kubenswrapper[5119]: W0121 10:23:17.912895 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9656acf6_a085_424e_9bed_bfc45b74afc5.slice/crio-9f3c0a9446f8f75f24898c96f7c2c937848e6813f1ed40db98fd56cad156f85c WatchSource:0}: Error finding container 9f3c0a9446f8f75f24898c96f7c2c937848e6813f1ed40db98fd56cad156f85c: Status 404 returned error can't find the container with id 9f3c0a9446f8f75f24898c96f7c2c937848e6813f1ed40db98fd56cad156f85c Jan 21 10:23:18 crc kubenswrapper[5119]: I0121 10:23:18.517284 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-srjmg" event={"ID":"29b45a36-1894-437c-aa94-b91f1008f40f","Type":"ContainerStarted","Data":"82b837a124c8cfebbac2d27dbb4993059f237687ffcea50c2fd547beba05b0d0"} Jan 21 10:23:18 crc kubenswrapper[5119]: I0121 10:23:18.520428 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"9656acf6-a085-424e-9bed-bfc45b74afc5","Type":"ContainerStarted","Data":"9f3c0a9446f8f75f24898c96f7c2c937848e6813f1ed40db98fd56cad156f85c"} Jan 21 10:23:18 crc kubenswrapper[5119]: I0121 10:23:18.591576 5119 scope.go:117] "RemoveContainer" containerID="b3efb26845161d27399ea77845cad2c1524875e9960a6c344d0d602df7e5951e" Jan 21 10:23:18 crc kubenswrapper[5119]: I0121 10:23:18.591759 5119 scope.go:117] "RemoveContainer" containerID="3abe897a5b1044b6c43f53de69a8a995c3b3071d86c20c3925f4fe049a303695" Jan 21 10:23:19 crc kubenswrapper[5119]: I0121 10:23:19.591189 5119 scope.go:117] "RemoveContainer" 
containerID="4205ebc442180a2f503135d89aae8045b919675dc55ab4cfb3ecf2e3bbc97577" Jan 21 10:23:19 crc kubenswrapper[5119]: I0121 10:23:19.919413 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:23:19 crc kubenswrapper[5119]: I0121 10:23:19.919541 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:23:20 crc kubenswrapper[5119]: I0121 10:23:20.549021 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k" event={"ID":"f2986212-b53b-4df9-9cd5-884f35c89cba","Type":"ContainerStarted","Data":"a7adedb4be71d6014261030327c64e4e25979af8ea151fec8a122820ff46855c"} Jan 21 10:23:20 crc kubenswrapper[5119]: I0121 10:23:20.552262 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm" event={"ID":"5ab338f9-d819-4ea4-9298-e9b521d0d494","Type":"ContainerStarted","Data":"11aaead199869ef04b7baf081bbe636a1c6965b479766dd4228a65f273d2f83f"} Jan 21 10:23:20 crc kubenswrapper[5119]: I0121 10:23:20.555064 5119 generic.go:358] "Generic (PLEG): container finished" podID="9656acf6-a085-424e-9bed-bfc45b74afc5" containerID="78404bce8ff0b39695343363268dcc1a1c664991f3cc87d19034e99dfb21e12d" exitCode=0 Jan 21 10:23:20 crc kubenswrapper[5119]: I0121 10:23:20.555184 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" 
event={"ID":"9656acf6-a085-424e-9bed-bfc45b74afc5","Type":"ContainerDied","Data":"78404bce8ff0b39695343363268dcc1a1c664991f3cc87d19034e99dfb21e12d"} Jan 21 10:23:20 crc kubenswrapper[5119]: I0121 10:23:20.559241 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv" event={"ID":"2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7","Type":"ContainerStarted","Data":"233037c9d6c5c0e20aa3c64c88d81fd675e08f55bee64d4977d8464c4facc219"} Jan 21 10:23:20 crc kubenswrapper[5119]: I0121 10:23:20.623212 5119 scope.go:117] "RemoveContainer" containerID="d6be1d6d743895dfc4cf84d4adc3beb93423fc5799d0b4bffea18f9661e4c6f6" Jan 21 10:23:21 crc kubenswrapper[5119]: I0121 10:23:21.578300 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t" event={"ID":"e23122ba-6ad2-407e-aaeb-7c8f6e27ab54","Type":"ContainerStarted","Data":"aac077d0855a740550bb997af8d18c21beb4345b958096ec0daaf2ae83f6dd26"} Jan 21 10:23:24 crc kubenswrapper[5119]: I0121 10:23:24.465028 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Jan 21 10:23:24 crc kubenswrapper[5119]: I0121 10:23:24.532508 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krh6d\" (UniqueName: \"kubernetes.io/projected/9656acf6-a085-424e-9bed-bfc45b74afc5-kube-api-access-krh6d\") pod \"9656acf6-a085-424e-9bed-bfc45b74afc5\" (UID: \"9656acf6-a085-424e-9bed-bfc45b74afc5\") " Jan 21 10:23:24 crc kubenswrapper[5119]: I0121 10:23:24.538838 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9656acf6-a085-424e-9bed-bfc45b74afc5-kube-api-access-krh6d" (OuterVolumeSpecName: "kube-api-access-krh6d") pod "9656acf6-a085-424e-9bed-bfc45b74afc5" (UID: "9656acf6-a085-424e-9bed-bfc45b74afc5"). InnerVolumeSpecName "kube-api-access-krh6d". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:23:24 crc kubenswrapper[5119]: I0121 10:23:24.603806 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"9656acf6-a085-424e-9bed-bfc45b74afc5","Type":"ContainerDied","Data":"9f3c0a9446f8f75f24898c96f7c2c937848e6813f1ed40db98fd56cad156f85c"} Jan 21 10:23:24 crc kubenswrapper[5119]: I0121 10:23:24.603843 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f3c0a9446f8f75f24898c96f7c2c937848e6813f1ed40db98fd56cad156f85c" Jan 21 10:23:24 crc kubenswrapper[5119]: I0121 10:23:24.605547 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Jan 21 10:23:24 crc kubenswrapper[5119]: I0121 10:23:24.630870 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_9656acf6-a085-424e-9bed-bfc45b74afc5/curl/0.log" Jan 21 10:23:24 crc kubenswrapper[5119]: I0121 10:23:24.633833 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-krh6d\" (UniqueName: \"kubernetes.io/projected/9656acf6-a085-424e-9bed-bfc45b74afc5-kube-api-access-krh6d\") on node \"crc\" DevicePath \"\"" Jan 21 10:23:24 crc kubenswrapper[5119]: I0121 10:23:24.965268 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-hnfrz_ec93941b-ddbb-42d6-ae36-3c643b48a65b/prometheus-webhook-snmp/0.log" Jan 21 10:23:30 crc kubenswrapper[5119]: I0121 10:23:30.650423 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-srjmg" event={"ID":"29b45a36-1894-437c-aa94-b91f1008f40f","Type":"ContainerStarted","Data":"17fd522cf799506137483675a845b7bda0308e094e948d131afdf41956ee9b71"} Jan 21 10:23:48 crc kubenswrapper[5119]: I0121 10:23:48.788105 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-srjmg" 
event={"ID":"29b45a36-1894-437c-aa94-b91f1008f40f","Type":"ContainerStarted","Data":"c8edb206213d953e64922a6fc847c5b21c608a8791d285e65519be6fa444af11"} Jan 21 10:23:48 crc kubenswrapper[5119]: I0121 10:23:48.813456 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-srjmg" podStartSLOduration=2.034800124 podStartE2EDuration="32.813433878s" podCreationTimestamp="2026-01-21 10:23:16 +0000 UTC" firstStartedPulling="2026-01-21 10:23:17.777517575 +0000 UTC m=+1713.445609253" lastFinishedPulling="2026-01-21 10:23:48.556151329 +0000 UTC m=+1744.224243007" observedRunningTime="2026-01-21 10:23:48.807443675 +0000 UTC m=+1744.475535363" watchObservedRunningTime="2026-01-21 10:23:48.813433878 +0000 UTC m=+1744.481525556" Jan 21 10:23:49 crc kubenswrapper[5119]: I0121 10:23:49.918531 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:23:49 crc kubenswrapper[5119]: I0121 10:23:49.919012 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:23:55 crc kubenswrapper[5119]: I0121 10:23:55.165204 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-hnfrz_ec93941b-ddbb-42d6-ae36-3c643b48a65b/prometheus-webhook-snmp/0.log" Jan 21 10:24:00 crc kubenswrapper[5119]: I0121 10:24:00.131121 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483184-6h8tx"] Jan 21 10:24:00 crc kubenswrapper[5119]: 
I0121 10:24:00.139107 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9656acf6-a085-424e-9bed-bfc45b74afc5" containerName="curl" Jan 21 10:24:00 crc kubenswrapper[5119]: I0121 10:24:00.139124 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="9656acf6-a085-424e-9bed-bfc45b74afc5" containerName="curl" Jan 21 10:24:00 crc kubenswrapper[5119]: I0121 10:24:00.139261 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="9656acf6-a085-424e-9bed-bfc45b74afc5" containerName="curl" Jan 21 10:24:00 crc kubenswrapper[5119]: I0121 10:24:00.143153 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483184-6h8tx"] Jan 21 10:24:00 crc kubenswrapper[5119]: I0121 10:24:00.143263 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483184-6h8tx" Jan 21 10:24:00 crc kubenswrapper[5119]: I0121 10:24:00.146123 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\"" Jan 21 10:24:00 crc kubenswrapper[5119]: I0121 10:24:00.146310 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:24:00 crc kubenswrapper[5119]: I0121 10:24:00.146645 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:24:00 crc kubenswrapper[5119]: I0121 10:24:00.238760 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvnmv\" (UniqueName: \"kubernetes.io/projected/1aee3e16-2b3d-4a8f-92ea-639793f73b1f-kube-api-access-cvnmv\") pod \"auto-csr-approver-29483184-6h8tx\" (UID: \"1aee3e16-2b3d-4a8f-92ea-639793f73b1f\") " pod="openshift-infra/auto-csr-approver-29483184-6h8tx" Jan 21 10:24:00 crc kubenswrapper[5119]: I0121 10:24:00.340103 5119 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cvnmv\" (UniqueName: \"kubernetes.io/projected/1aee3e16-2b3d-4a8f-92ea-639793f73b1f-kube-api-access-cvnmv\") pod \"auto-csr-approver-29483184-6h8tx\" (UID: \"1aee3e16-2b3d-4a8f-92ea-639793f73b1f\") " pod="openshift-infra/auto-csr-approver-29483184-6h8tx" Jan 21 10:24:00 crc kubenswrapper[5119]: I0121 10:24:00.364403 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvnmv\" (UniqueName: \"kubernetes.io/projected/1aee3e16-2b3d-4a8f-92ea-639793f73b1f-kube-api-access-cvnmv\") pod \"auto-csr-approver-29483184-6h8tx\" (UID: \"1aee3e16-2b3d-4a8f-92ea-639793f73b1f\") " pod="openshift-infra/auto-csr-approver-29483184-6h8tx" Jan 21 10:24:00 crc kubenswrapper[5119]: I0121 10:24:00.460960 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483184-6h8tx" Jan 21 10:24:00 crc kubenswrapper[5119]: I0121 10:24:00.712943 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483184-6h8tx"] Jan 21 10:24:00 crc kubenswrapper[5119]: I0121 10:24:00.889045 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483184-6h8tx" event={"ID":"1aee3e16-2b3d-4a8f-92ea-639793f73b1f","Type":"ContainerStarted","Data":"4690f180f3f389069476034b8e97ea464dcc6a71879c9ea6312b5d68c48950e8"} Jan 21 10:24:02 crc kubenswrapper[5119]: I0121 10:24:02.903619 5119 generic.go:358] "Generic (PLEG): container finished" podID="1aee3e16-2b3d-4a8f-92ea-639793f73b1f" containerID="fe9e208109fff8950214b0c4015d37f36fafecb051869c105ddb4281ce62120a" exitCode=0 Jan 21 10:24:02 crc kubenswrapper[5119]: I0121 10:24:02.904304 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483184-6h8tx" 
event={"ID":"1aee3e16-2b3d-4a8f-92ea-639793f73b1f","Type":"ContainerDied","Data":"fe9e208109fff8950214b0c4015d37f36fafecb051869c105ddb4281ce62120a"} Jan 21 10:24:03 crc kubenswrapper[5119]: I0121 10:24:03.913699 5119 generic.go:358] "Generic (PLEG): container finished" podID="29b45a36-1894-437c-aa94-b91f1008f40f" containerID="17fd522cf799506137483675a845b7bda0308e094e948d131afdf41956ee9b71" exitCode=0 Jan 21 10:24:03 crc kubenswrapper[5119]: I0121 10:24:03.913836 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-srjmg" event={"ID":"29b45a36-1894-437c-aa94-b91f1008f40f","Type":"ContainerDied","Data":"17fd522cf799506137483675a845b7bda0308e094e948d131afdf41956ee9b71"} Jan 21 10:24:03 crc kubenswrapper[5119]: I0121 10:24:03.914958 5119 scope.go:117] "RemoveContainer" containerID="17fd522cf799506137483675a845b7bda0308e094e948d131afdf41956ee9b71" Jan 21 10:24:04 crc kubenswrapper[5119]: I0121 10:24:04.154206 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483184-6h8tx" Jan 21 10:24:04 crc kubenswrapper[5119]: I0121 10:24:04.198717 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvnmv\" (UniqueName: \"kubernetes.io/projected/1aee3e16-2b3d-4a8f-92ea-639793f73b1f-kube-api-access-cvnmv\") pod \"1aee3e16-2b3d-4a8f-92ea-639793f73b1f\" (UID: \"1aee3e16-2b3d-4a8f-92ea-639793f73b1f\") " Jan 21 10:24:04 crc kubenswrapper[5119]: I0121 10:24:04.205299 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aee3e16-2b3d-4a8f-92ea-639793f73b1f-kube-api-access-cvnmv" (OuterVolumeSpecName: "kube-api-access-cvnmv") pod "1aee3e16-2b3d-4a8f-92ea-639793f73b1f" (UID: "1aee3e16-2b3d-4a8f-92ea-639793f73b1f"). InnerVolumeSpecName "kube-api-access-cvnmv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:24:04 crc kubenswrapper[5119]: I0121 10:24:04.318239 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cvnmv\" (UniqueName: \"kubernetes.io/projected/1aee3e16-2b3d-4a8f-92ea-639793f73b1f-kube-api-access-cvnmv\") on node \"crc\" DevicePath \"\"" Jan 21 10:24:04 crc kubenswrapper[5119]: I0121 10:24:04.923672 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483184-6h8tx" Jan 21 10:24:04 crc kubenswrapper[5119]: I0121 10:24:04.923778 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483184-6h8tx" event={"ID":"1aee3e16-2b3d-4a8f-92ea-639793f73b1f","Type":"ContainerDied","Data":"4690f180f3f389069476034b8e97ea464dcc6a71879c9ea6312b5d68c48950e8"} Jan 21 10:24:04 crc kubenswrapper[5119]: I0121 10:24:04.923819 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4690f180f3f389069476034b8e97ea464dcc6a71879c9ea6312b5d68c48950e8" Jan 21 10:24:05 crc kubenswrapper[5119]: I0121 10:24:05.208218 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483178-q5fxc"] Jan 21 10:24:05 crc kubenswrapper[5119]: I0121 10:24:05.216415 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483178-q5fxc"] Jan 21 10:24:06 crc kubenswrapper[5119]: I0121 10:24:06.604202 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e867fbb0-fbdc-4af4-b712-c6107d53366e" path="/var/lib/kubelet/pods/e867fbb0-fbdc-4af4-b712-c6107d53366e/volumes" Jan 21 10:24:19 crc kubenswrapper[5119]: I0121 10:24:19.918752 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 21 10:24:19 crc kubenswrapper[5119]: I0121 10:24:19.919444 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:24:19 crc kubenswrapper[5119]: I0121 10:24:19.919494 5119 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" Jan 21 10:24:19 crc kubenswrapper[5119]: I0121 10:24:19.920323 5119 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75"} pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 10:24:19 crc kubenswrapper[5119]: I0121 10:24:19.920448 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" containerID="cri-o://aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75" gracePeriod=600 Jan 21 10:24:20 crc kubenswrapper[5119]: E0121 10:24:20.044076 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:24:21 crc kubenswrapper[5119]: 
I0121 10:24:21.042521 5119 generic.go:358] "Generic (PLEG): container finished" podID="29b45a36-1894-437c-aa94-b91f1008f40f" containerID="c8edb206213d953e64922a6fc847c5b21c608a8791d285e65519be6fa444af11" exitCode=0 Jan 21 10:24:21 crc kubenswrapper[5119]: I0121 10:24:21.042616 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-srjmg" event={"ID":"29b45a36-1894-437c-aa94-b91f1008f40f","Type":"ContainerDied","Data":"c8edb206213d953e64922a6fc847c5b21c608a8791d285e65519be6fa444af11"} Jan 21 10:24:21 crc kubenswrapper[5119]: I0121 10:24:21.046793 5119 generic.go:358] "Generic (PLEG): container finished" podID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerID="aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75" exitCode=0 Jan 21 10:24:21 crc kubenswrapper[5119]: I0121 10:24:21.046960 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerDied","Data":"aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75"} Jan 21 10:24:21 crc kubenswrapper[5119]: I0121 10:24:21.047001 5119 scope.go:117] "RemoveContainer" containerID="b6ab9884520f29aaaa049d566e0f03918d804eb56ab65d3fab45a8a4f5ef9ba3" Jan 21 10:24:21 crc kubenswrapper[5119]: I0121 10:24:21.047974 5119 scope.go:117] "RemoveContainer" containerID="aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75" Jan 21 10:24:21 crc kubenswrapper[5119]: E0121 10:24:21.048559 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:24:22 crc 
kubenswrapper[5119]: I0121 10:24:22.294077 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-srjmg" Jan 21 10:24:22 crc kubenswrapper[5119]: I0121 10:24:22.388278 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-ceilometer-entrypoint-script\") pod \"29b45a36-1894-437c-aa94-b91f1008f40f\" (UID: \"29b45a36-1894-437c-aa94-b91f1008f40f\") " Jan 21 10:24:22 crc kubenswrapper[5119]: I0121 10:24:22.388428 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-collectd-entrypoint-script\") pod \"29b45a36-1894-437c-aa94-b91f1008f40f\" (UID: \"29b45a36-1894-437c-aa94-b91f1008f40f\") " Jan 21 10:24:22 crc kubenswrapper[5119]: I0121 10:24:22.388475 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-sensubility-config\") pod \"29b45a36-1894-437c-aa94-b91f1008f40f\" (UID: \"29b45a36-1894-437c-aa94-b91f1008f40f\") " Jan 21 10:24:22 crc kubenswrapper[5119]: I0121 10:24:22.388511 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-ceilometer-publisher\") pod \"29b45a36-1894-437c-aa94-b91f1008f40f\" (UID: \"29b45a36-1894-437c-aa94-b91f1008f40f\") " Jan 21 10:24:22 crc kubenswrapper[5119]: I0121 10:24:22.388531 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-collectd-config\") pod \"29b45a36-1894-437c-aa94-b91f1008f40f\" (UID: 
\"29b45a36-1894-437c-aa94-b91f1008f40f\") " Jan 21 10:24:22 crc kubenswrapper[5119]: I0121 10:24:22.388657 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-healthcheck-log\") pod \"29b45a36-1894-437c-aa94-b91f1008f40f\" (UID: \"29b45a36-1894-437c-aa94-b91f1008f40f\") " Jan 21 10:24:22 crc kubenswrapper[5119]: I0121 10:24:22.388705 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqqtt\" (UniqueName: \"kubernetes.io/projected/29b45a36-1894-437c-aa94-b91f1008f40f-kube-api-access-dqqtt\") pod \"29b45a36-1894-437c-aa94-b91f1008f40f\" (UID: \"29b45a36-1894-437c-aa94-b91f1008f40f\") " Jan 21 10:24:22 crc kubenswrapper[5119]: I0121 10:24:22.405336 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "29b45a36-1894-437c-aa94-b91f1008f40f" (UID: "29b45a36-1894-437c-aa94-b91f1008f40f"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:24:22 crc kubenswrapper[5119]: I0121 10:24:22.407366 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "29b45a36-1894-437c-aa94-b91f1008f40f" (UID: "29b45a36-1894-437c-aa94-b91f1008f40f"). InnerVolumeSpecName "ceilometer-publisher". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:24:22 crc kubenswrapper[5119]: I0121 10:24:22.407559 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "29b45a36-1894-437c-aa94-b91f1008f40f" (UID: "29b45a36-1894-437c-aa94-b91f1008f40f"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:24:22 crc kubenswrapper[5119]: I0121 10:24:22.410209 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "29b45a36-1894-437c-aa94-b91f1008f40f" (UID: "29b45a36-1894-437c-aa94-b91f1008f40f"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:24:22 crc kubenswrapper[5119]: I0121 10:24:22.410364 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29b45a36-1894-437c-aa94-b91f1008f40f-kube-api-access-dqqtt" (OuterVolumeSpecName: "kube-api-access-dqqtt") pod "29b45a36-1894-437c-aa94-b91f1008f40f" (UID: "29b45a36-1894-437c-aa94-b91f1008f40f"). InnerVolumeSpecName "kube-api-access-dqqtt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:24:22 crc kubenswrapper[5119]: I0121 10:24:22.410670 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "29b45a36-1894-437c-aa94-b91f1008f40f" (UID: "29b45a36-1894-437c-aa94-b91f1008f40f"). InnerVolumeSpecName "collectd-entrypoint-script". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:24:22 crc kubenswrapper[5119]: I0121 10:24:22.421865 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "29b45a36-1894-437c-aa94-b91f1008f40f" (UID: "29b45a36-1894-437c-aa94-b91f1008f40f"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:24:22 crc kubenswrapper[5119]: I0121 10:24:22.490681 5119 reconciler_common.go:299] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 21 10:24:22 crc kubenswrapper[5119]: I0121 10:24:22.490717 5119 reconciler_common.go:299] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 21 10:24:22 crc kubenswrapper[5119]: I0121 10:24:22.490728 5119 reconciler_common.go:299] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-sensubility-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:24:22 crc kubenswrapper[5119]: I0121 10:24:22.490736 5119 reconciler_common.go:299] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Jan 21 10:24:22 crc kubenswrapper[5119]: I0121 10:24:22.490745 5119 reconciler_common.go:299] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-collectd-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:24:22 crc kubenswrapper[5119]: 
I0121 10:24:22.490753 5119 reconciler_common.go:299] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/29b45a36-1894-437c-aa94-b91f1008f40f-healthcheck-log\") on node \"crc\" DevicePath \"\"" Jan 21 10:24:22 crc kubenswrapper[5119]: I0121 10:24:22.490760 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dqqtt\" (UniqueName: \"kubernetes.io/projected/29b45a36-1894-437c-aa94-b91f1008f40f-kube-api-access-dqqtt\") on node \"crc\" DevicePath \"\"" Jan 21 10:24:23 crc kubenswrapper[5119]: I0121 10:24:23.064656 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-srjmg" Jan 21 10:24:23 crc kubenswrapper[5119]: I0121 10:24:23.064656 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-srjmg" event={"ID":"29b45a36-1894-437c-aa94-b91f1008f40f","Type":"ContainerDied","Data":"82b837a124c8cfebbac2d27dbb4993059f237687ffcea50c2fd547beba05b0d0"} Jan 21 10:24:23 crc kubenswrapper[5119]: I0121 10:24:23.065099 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82b837a124c8cfebbac2d27dbb4993059f237687ffcea50c2fd547beba05b0d0" Jan 21 10:24:24 crc kubenswrapper[5119]: I0121 10:24:24.341861 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-srjmg_29b45a36-1894-437c-aa94-b91f1008f40f/smoketest-collectd/0.log" Jan 21 10:24:24 crc kubenswrapper[5119]: I0121 10:24:24.673680 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-srjmg_29b45a36-1894-437c-aa94-b91f1008f40f/smoketest-ceilometer/0.log" Jan 21 10:24:25 crc kubenswrapper[5119]: I0121 10:24:25.000336 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-55bf8d5cb-hzlwg_0661936f-76da-4b08-818a-352bba8bad5c/default-interconnect/0.log" Jan 21 10:24:25 crc 
kubenswrapper[5119]: I0121 10:24:25.320537 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k_f2986212-b53b-4df9-9cd5-884f35c89cba/bridge/2.log" Jan 21 10:24:25 crc kubenswrapper[5119]: I0121 10:24:25.607471 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k_f2986212-b53b-4df9-9cd5-884f35c89cba/sg-core/0.log" Jan 21 10:24:25 crc kubenswrapper[5119]: I0121 10:24:25.894119 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm_5ab338f9-d819-4ea4-9298-e9b521d0d494/bridge/2.log" Jan 21 10:24:26 crc kubenswrapper[5119]: I0121 10:24:26.162262 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm_5ab338f9-d819-4ea4-9298-e9b521d0d494/sg-core/0.log" Jan 21 10:24:26 crc kubenswrapper[5119]: I0121 10:24:26.499849 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t_e23122ba-6ad2-407e-aaeb-7c8f6e27ab54/bridge/2.log" Jan 21 10:24:26 crc kubenswrapper[5119]: I0121 10:24:26.793902 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t_e23122ba-6ad2-407e-aaeb-7c8f6e27ab54/sg-core/0.log" Jan 21 10:24:27 crc kubenswrapper[5119]: I0121 10:24:27.061595 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z_89079dd8-483b-44ef-81be-6ab712709669/bridge/2.log" Jan 21 10:24:27 crc kubenswrapper[5119]: I0121 10:24:27.711668 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z_89079dd8-483b-44ef-81be-6ab712709669/sg-core/0.log" 
Jan 21 10:24:28 crc kubenswrapper[5119]: I0121 10:24:28.494474 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv_2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7/bridge/2.log" Jan 21 10:24:28 crc kubenswrapper[5119]: I0121 10:24:28.744716 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv_2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7/sg-core/0.log" Jan 21 10:24:31 crc kubenswrapper[5119]: I0121 10:24:31.891912 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-57588ddc85-czgbb_c7cc2173-29c2-4a8e-ab2b-7e373d79c484/operator/0.log" Jan 21 10:24:32 crc kubenswrapper[5119]: I0121 10:24:32.193202 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_481848f8-834a-47be-9301-1153fcbc51ef/prometheus/0.log" Jan 21 10:24:32 crc kubenswrapper[5119]: I0121 10:24:32.503554 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_f12216d5-43c7-4e0c-be7a-74aa76900a78/elasticsearch/0.log" Jan 21 10:24:32 crc kubenswrapper[5119]: I0121 10:24:32.788156 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-hnfrz_ec93941b-ddbb-42d6-ae36-3c643b48a65b/prometheus-webhook-snmp/0.log" Jan 21 10:24:33 crc kubenswrapper[5119]: I0121 10:24:33.073855 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_d0d6eb61-b1b6-4df6-a282-2f98000680b0/alertmanager/0.log" Jan 21 10:24:33 crc kubenswrapper[5119]: I0121 10:24:33.590970 5119 scope.go:117] "RemoveContainer" containerID="aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75" Jan 21 10:24:33 crc kubenswrapper[5119]: E0121 10:24:33.591327 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:24:45 crc kubenswrapper[5119]: I0121 10:24:45.591104 5119 scope.go:117] "RemoveContainer" containerID="aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75" Jan 21 10:24:45 crc kubenswrapper[5119]: E0121 10:24:45.592099 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:24:46 crc kubenswrapper[5119]: I0121 10:24:46.143336 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log" Jan 21 10:24:46 crc kubenswrapper[5119]: I0121 10:24:46.146473 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 21 10:24:46 crc kubenswrapper[5119]: I0121 10:24:46.154155 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log" Jan 21 10:24:46 crc kubenswrapper[5119]: I0121 10:24:46.157736 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 21 10:24:48 crc 
kubenswrapper[5119]: I0121 10:24:48.461217 5119 scope.go:117] "RemoveContainer" containerID="baf092aef4f8e068011115e07621e0fd54e5919a6ac3aeb96a12dc2c5341ec81" Jan 21 10:24:48 crc kubenswrapper[5119]: I0121 10:24:48.528397 5119 scope.go:117] "RemoveContainer" containerID="7de6ec9856e965b8204cf44dbfd193c0e5bc9c0937f19e2a722017a7e6b76816" Jan 21 10:24:48 crc kubenswrapper[5119]: I0121 10:24:48.602932 5119 scope.go:117] "RemoveContainer" containerID="66b5b63cf4fa59d9b8d013c8cda68a8ee0daeffca56cdd2b879198cfb2c9c8c9" Jan 21 10:24:48 crc kubenswrapper[5119]: I0121 10:24:48.671246 5119 scope.go:117] "RemoveContainer" containerID="5400b7852c35fffafcc91609a8c95507ff70393dbd7bd92b0682bb8bb9daf859" Jan 21 10:24:48 crc kubenswrapper[5119]: I0121 10:24:48.739173 5119 scope.go:117] "RemoveContainer" containerID="99f70033747c3cbd096e7e93ad7954653e317507483059290e4b8f9421f8a1c8" Jan 21 10:24:50 crc kubenswrapper[5119]: I0121 10:24:50.004863 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-84d7cb46fc-c4c9r_7ede4c81-63e3-44ba-8e96-9dcb8c34adce/operator/0.log" Jan 21 10:24:53 crc kubenswrapper[5119]: I0121 10:24:53.568080 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-57588ddc85-czgbb_c7cc2173-29c2-4a8e-ab2b-7e373d79c484/operator/0.log" Jan 21 10:24:53 crc kubenswrapper[5119]: I0121 10:24:53.881898 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_37e04609-ba47-4c81-bd8f-40f2342d42d5/qdr/0.log" Jan 21 10:25:00 crc kubenswrapper[5119]: I0121 10:25:00.599825 5119 scope.go:117] "RemoveContainer" containerID="aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75" Jan 21 10:25:00 crc kubenswrapper[5119]: E0121 10:25:00.600678 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:25:11 crc kubenswrapper[5119]: I0121 10:25:11.591744 5119 scope.go:117] "RemoveContainer" containerID="aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75" Jan 21 10:25:11 crc kubenswrapper[5119]: E0121 10:25:11.593112 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:25:19 crc kubenswrapper[5119]: I0121 10:25:19.515139 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wqqrr/must-gather-r6f75"] Jan 21 10:25:19 crc kubenswrapper[5119]: I0121 10:25:19.516750 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="29b45a36-1894-437c-aa94-b91f1008f40f" containerName="smoketest-collectd" Jan 21 10:25:19 crc kubenswrapper[5119]: I0121 10:25:19.516770 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b45a36-1894-437c-aa94-b91f1008f40f" containerName="smoketest-collectd" Jan 21 10:25:19 crc kubenswrapper[5119]: I0121 10:25:19.516782 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="29b45a36-1894-437c-aa94-b91f1008f40f" containerName="smoketest-ceilometer" Jan 21 10:25:19 crc kubenswrapper[5119]: I0121 10:25:19.516789 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b45a36-1894-437c-aa94-b91f1008f40f" containerName="smoketest-ceilometer" Jan 21 10:25:19 crc kubenswrapper[5119]: I0121 10:25:19.516812 5119 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1aee3e16-2b3d-4a8f-92ea-639793f73b1f" containerName="oc" Jan 21 10:25:19 crc kubenswrapper[5119]: I0121 10:25:19.516819 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aee3e16-2b3d-4a8f-92ea-639793f73b1f" containerName="oc" Jan 21 10:25:19 crc kubenswrapper[5119]: I0121 10:25:19.516987 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="29b45a36-1894-437c-aa94-b91f1008f40f" containerName="smoketest-ceilometer" Jan 21 10:25:19 crc kubenswrapper[5119]: I0121 10:25:19.517003 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="1aee3e16-2b3d-4a8f-92ea-639793f73b1f" containerName="oc" Jan 21 10:25:19 crc kubenswrapper[5119]: I0121 10:25:19.517022 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="29b45a36-1894-437c-aa94-b91f1008f40f" containerName="smoketest-collectd" Jan 21 10:25:19 crc kubenswrapper[5119]: I0121 10:25:19.521231 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wqqrr/must-gather-r6f75" Jan 21 10:25:19 crc kubenswrapper[5119]: I0121 10:25:19.523182 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wqqrr/must-gather-r6f75"] Jan 21 10:25:19 crc kubenswrapper[5119]: I0121 10:25:19.524835 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-wqqrr\"/\"kube-root-ca.crt\"" Jan 21 10:25:19 crc kubenswrapper[5119]: I0121 10:25:19.525053 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-wqqrr\"/\"openshift-service-ca.crt\"" Jan 21 10:25:19 crc kubenswrapper[5119]: I0121 10:25:19.597346 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0ecda810-3606-443d-b45b-fed999c5ee87-must-gather-output\") pod \"must-gather-r6f75\" (UID: \"0ecda810-3606-443d-b45b-fed999c5ee87\") " pod="openshift-must-gather-wqqrr/must-gather-r6f75" Jan 21 10:25:19 crc kubenswrapper[5119]: I0121 10:25:19.597528 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9hrp\" (UniqueName: \"kubernetes.io/projected/0ecda810-3606-443d-b45b-fed999c5ee87-kube-api-access-l9hrp\") pod \"must-gather-r6f75\" (UID: \"0ecda810-3606-443d-b45b-fed999c5ee87\") " pod="openshift-must-gather-wqqrr/must-gather-r6f75" Jan 21 10:25:19 crc kubenswrapper[5119]: I0121 10:25:19.699595 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l9hrp\" (UniqueName: \"kubernetes.io/projected/0ecda810-3606-443d-b45b-fed999c5ee87-kube-api-access-l9hrp\") pod \"must-gather-r6f75\" (UID: \"0ecda810-3606-443d-b45b-fed999c5ee87\") " pod="openshift-must-gather-wqqrr/must-gather-r6f75" Jan 21 10:25:19 crc kubenswrapper[5119]: I0121 10:25:19.699737 5119 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0ecda810-3606-443d-b45b-fed999c5ee87-must-gather-output\") pod \"must-gather-r6f75\" (UID: \"0ecda810-3606-443d-b45b-fed999c5ee87\") " pod="openshift-must-gather-wqqrr/must-gather-r6f75" Jan 21 10:25:19 crc kubenswrapper[5119]: I0121 10:25:19.700464 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0ecda810-3606-443d-b45b-fed999c5ee87-must-gather-output\") pod \"must-gather-r6f75\" (UID: \"0ecda810-3606-443d-b45b-fed999c5ee87\") " pod="openshift-must-gather-wqqrr/must-gather-r6f75" Jan 21 10:25:19 crc kubenswrapper[5119]: I0121 10:25:19.728036 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9hrp\" (UniqueName: \"kubernetes.io/projected/0ecda810-3606-443d-b45b-fed999c5ee87-kube-api-access-l9hrp\") pod \"must-gather-r6f75\" (UID: \"0ecda810-3606-443d-b45b-fed999c5ee87\") " pod="openshift-must-gather-wqqrr/must-gather-r6f75" Jan 21 10:25:19 crc kubenswrapper[5119]: I0121 10:25:19.837054 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wqqrr/must-gather-r6f75" Jan 21 10:25:20 crc kubenswrapper[5119]: I0121 10:25:20.287275 5119 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 10:25:20 crc kubenswrapper[5119]: I0121 10:25:20.293626 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wqqrr/must-gather-r6f75"] Jan 21 10:25:20 crc kubenswrapper[5119]: I0121 10:25:20.923796 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wqqrr/must-gather-r6f75" event={"ID":"0ecda810-3606-443d-b45b-fed999c5ee87","Type":"ContainerStarted","Data":"7c0a6352fc959adaf00c0083dc4efe803ffca37295439174acf572be39104a05"} Jan 21 10:25:22 crc kubenswrapper[5119]: I0121 10:25:22.591366 5119 scope.go:117] "RemoveContainer" containerID="aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75" Jan 21 10:25:22 crc kubenswrapper[5119]: E0121 10:25:22.591688 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:25:27 crc kubenswrapper[5119]: I0121 10:25:27.094027 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-jvgdj"] Jan 21 10:25:27 crc kubenswrapper[5119]: I0121 10:25:27.147246 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-jvgdj"] Jan 21 10:25:27 crc kubenswrapper[5119]: I0121 10:25:27.147386 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-jvgdj" Jan 21 10:25:27 crc kubenswrapper[5119]: I0121 10:25:27.203341 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpfww\" (UniqueName: \"kubernetes.io/projected/a8a0125b-76ab-4447-ab95-378759ee9e99-kube-api-access-xpfww\") pod \"infrawatch-operators-jvgdj\" (UID: \"a8a0125b-76ab-4447-ab95-378759ee9e99\") " pod="service-telemetry/infrawatch-operators-jvgdj" Jan 21 10:25:27 crc kubenswrapper[5119]: I0121 10:25:27.304794 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xpfww\" (UniqueName: \"kubernetes.io/projected/a8a0125b-76ab-4447-ab95-378759ee9e99-kube-api-access-xpfww\") pod \"infrawatch-operators-jvgdj\" (UID: \"a8a0125b-76ab-4447-ab95-378759ee9e99\") " pod="service-telemetry/infrawatch-operators-jvgdj" Jan 21 10:25:27 crc kubenswrapper[5119]: I0121 10:25:27.324205 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpfww\" (UniqueName: \"kubernetes.io/projected/a8a0125b-76ab-4447-ab95-378759ee9e99-kube-api-access-xpfww\") pod \"infrawatch-operators-jvgdj\" (UID: \"a8a0125b-76ab-4447-ab95-378759ee9e99\") " pod="service-telemetry/infrawatch-operators-jvgdj" Jan 21 10:25:27 crc kubenswrapper[5119]: I0121 10:25:27.476507 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-jvgdj" Jan 21 10:25:27 crc kubenswrapper[5119]: I0121 10:25:27.681932 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-jvgdj"] Jan 21 10:25:27 crc kubenswrapper[5119]: W0121 10:25:27.684539 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8a0125b_76ab_4447_ab95_378759ee9e99.slice/crio-5fa5a1470543a43b5943785147111984cabfb152931836bbd3b7dbaede99cb9e WatchSource:0}: Error finding container 5fa5a1470543a43b5943785147111984cabfb152931836bbd3b7dbaede99cb9e: Status 404 returned error can't find the container with id 5fa5a1470543a43b5943785147111984cabfb152931836bbd3b7dbaede99cb9e Jan 21 10:25:27 crc kubenswrapper[5119]: I0121 10:25:27.980496 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wqqrr/must-gather-r6f75" event={"ID":"0ecda810-3606-443d-b45b-fed999c5ee87","Type":"ContainerStarted","Data":"97c901f2b36c95af7d0237bb7bc1531883f8f4d56f2cfe9dd0cf31f9ab0e5d63"} Jan 21 10:25:27 crc kubenswrapper[5119]: I0121 10:25:27.980923 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wqqrr/must-gather-r6f75" event={"ID":"0ecda810-3606-443d-b45b-fed999c5ee87","Type":"ContainerStarted","Data":"57493a324e8bebd0cd1c38bd26a8cce59e891b2c8d8833b7b66820cc7333c50f"} Jan 21 10:25:27 crc kubenswrapper[5119]: I0121 10:25:27.982011 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-jvgdj" event={"ID":"a8a0125b-76ab-4447-ab95-378759ee9e99","Type":"ContainerStarted","Data":"9692121b91603c13480ab7a2cbdc3fed1b7b4431dbe50c6591247695873235f4"} Jan 21 10:25:27 crc kubenswrapper[5119]: I0121 10:25:27.982173 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-jvgdj" 
event={"ID":"a8a0125b-76ab-4447-ab95-378759ee9e99","Type":"ContainerStarted","Data":"5fa5a1470543a43b5943785147111984cabfb152931836bbd3b7dbaede99cb9e"} Jan 21 10:25:28 crc kubenswrapper[5119]: I0121 10:25:28.004975 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wqqrr/must-gather-r6f75" podStartSLOduration=2.411881202 podStartE2EDuration="9.004957481s" podCreationTimestamp="2026-01-21 10:25:19 +0000 UTC" firstStartedPulling="2026-01-21 10:25:20.287727995 +0000 UTC m=+1835.955819673" lastFinishedPulling="2026-01-21 10:25:26.880804274 +0000 UTC m=+1842.548895952" observedRunningTime="2026-01-21 10:25:27.994327673 +0000 UTC m=+1843.662419351" watchObservedRunningTime="2026-01-21 10:25:28.004957481 +0000 UTC m=+1843.673049169" Jan 21 10:25:28 crc kubenswrapper[5119]: I0121 10:25:28.011817 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-jvgdj" podStartSLOduration=0.912878573 podStartE2EDuration="1.011800836s" podCreationTimestamp="2026-01-21 10:25:27 +0000 UTC" firstStartedPulling="2026-01-21 10:25:27.685920796 +0000 UTC m=+1843.354012474" lastFinishedPulling="2026-01-21 10:25:27.784843059 +0000 UTC m=+1843.452934737" observedRunningTime="2026-01-21 10:25:28.006717838 +0000 UTC m=+1843.674809516" watchObservedRunningTime="2026-01-21 10:25:28.011800836 +0000 UTC m=+1843.679892514" Jan 21 10:25:34 crc kubenswrapper[5119]: I0121 10:25:34.596175 5119 scope.go:117] "RemoveContainer" containerID="aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75" Jan 21 10:25:34 crc kubenswrapper[5119]: E0121 10:25:34.597071 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:25:37 crc kubenswrapper[5119]: I0121 10:25:37.477393 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-jvgdj" Jan 21 10:25:37 crc kubenswrapper[5119]: I0121 10:25:37.478849 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-jvgdj" Jan 21 10:25:37 crc kubenswrapper[5119]: I0121 10:25:37.512812 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-jvgdj" Jan 21 10:25:38 crc kubenswrapper[5119]: I0121 10:25:38.099208 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-jvgdj" Jan 21 10:25:38 crc kubenswrapper[5119]: I0121 10:25:38.135287 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-jvgdj"] Jan 21 10:25:39 crc kubenswrapper[5119]: I0121 10:25:39.634496 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-dc2k6_cec067a0-6e27-4e3f-b03a-f37ffd10dd43/control-plane-machine-set-operator/0.log" Jan 21 10:25:39 crc kubenswrapper[5119]: I0121 10:25:39.653136 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-6kqm2_1ccf6a04-2820-4b99-9dbd-2e6d111b4fed/kube-rbac-proxy/0.log" Jan 21 10:25:39 crc kubenswrapper[5119]: I0121 10:25:39.663078 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-6kqm2_1ccf6a04-2820-4b99-9dbd-2e6d111b4fed/machine-api-operator/0.log" Jan 21 10:25:40 crc kubenswrapper[5119]: I0121 10:25:40.078507 5119 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="service-telemetry/infrawatch-operators-jvgdj" podUID="a8a0125b-76ab-4447-ab95-378759ee9e99" containerName="registry-server" containerID="cri-o://9692121b91603c13480ab7a2cbdc3fed1b7b4431dbe50c6591247695873235f4" gracePeriod=2 Jan 21 10:25:40 crc kubenswrapper[5119]: I0121 10:25:40.431098 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-jvgdj" Jan 21 10:25:40 crc kubenswrapper[5119]: I0121 10:25:40.494907 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpfww\" (UniqueName: \"kubernetes.io/projected/a8a0125b-76ab-4447-ab95-378759ee9e99-kube-api-access-xpfww\") pod \"a8a0125b-76ab-4447-ab95-378759ee9e99\" (UID: \"a8a0125b-76ab-4447-ab95-378759ee9e99\") " Jan 21 10:25:40 crc kubenswrapper[5119]: I0121 10:25:40.501798 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8a0125b-76ab-4447-ab95-378759ee9e99-kube-api-access-xpfww" (OuterVolumeSpecName: "kube-api-access-xpfww") pod "a8a0125b-76ab-4447-ab95-378759ee9e99" (UID: "a8a0125b-76ab-4447-ab95-378759ee9e99"). InnerVolumeSpecName "kube-api-access-xpfww". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:25:40 crc kubenswrapper[5119]: I0121 10:25:40.597320 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xpfww\" (UniqueName: \"kubernetes.io/projected/a8a0125b-76ab-4447-ab95-378759ee9e99-kube-api-access-xpfww\") on node \"crc\" DevicePath \"\"" Jan 21 10:25:41 crc kubenswrapper[5119]: I0121 10:25:41.086968 5119 generic.go:358] "Generic (PLEG): container finished" podID="a8a0125b-76ab-4447-ab95-378759ee9e99" containerID="9692121b91603c13480ab7a2cbdc3fed1b7b4431dbe50c6591247695873235f4" exitCode=0 Jan 21 10:25:41 crc kubenswrapper[5119]: I0121 10:25:41.087075 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-jvgdj" event={"ID":"a8a0125b-76ab-4447-ab95-378759ee9e99","Type":"ContainerDied","Data":"9692121b91603c13480ab7a2cbdc3fed1b7b4431dbe50c6591247695873235f4"} Jan 21 10:25:41 crc kubenswrapper[5119]: I0121 10:25:41.087742 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-jvgdj" event={"ID":"a8a0125b-76ab-4447-ab95-378759ee9e99","Type":"ContainerDied","Data":"5fa5a1470543a43b5943785147111984cabfb152931836bbd3b7dbaede99cb9e"} Jan 21 10:25:41 crc kubenswrapper[5119]: I0121 10:25:41.087764 5119 scope.go:117] "RemoveContainer" containerID="9692121b91603c13480ab7a2cbdc3fed1b7b4431dbe50c6591247695873235f4" Jan 21 10:25:41 crc kubenswrapper[5119]: I0121 10:25:41.087121 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-jvgdj" Jan 21 10:25:41 crc kubenswrapper[5119]: I0121 10:25:41.111590 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-jvgdj"] Jan 21 10:25:41 crc kubenswrapper[5119]: I0121 10:25:41.115403 5119 scope.go:117] "RemoveContainer" containerID="9692121b91603c13480ab7a2cbdc3fed1b7b4431dbe50c6591247695873235f4" Jan 21 10:25:41 crc kubenswrapper[5119]: E0121 10:25:41.116090 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9692121b91603c13480ab7a2cbdc3fed1b7b4431dbe50c6591247695873235f4\": container with ID starting with 9692121b91603c13480ab7a2cbdc3fed1b7b4431dbe50c6591247695873235f4 not found: ID does not exist" containerID="9692121b91603c13480ab7a2cbdc3fed1b7b4431dbe50c6591247695873235f4" Jan 21 10:25:41 crc kubenswrapper[5119]: I0121 10:25:41.116131 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9692121b91603c13480ab7a2cbdc3fed1b7b4431dbe50c6591247695873235f4"} err="failed to get container status \"9692121b91603c13480ab7a2cbdc3fed1b7b4431dbe50c6591247695873235f4\": rpc error: code = NotFound desc = could not find container \"9692121b91603c13480ab7a2cbdc3fed1b7b4431dbe50c6591247695873235f4\": container with ID starting with 9692121b91603c13480ab7a2cbdc3fed1b7b4431dbe50c6591247695873235f4 not found: ID does not exist" Jan 21 10:25:41 crc kubenswrapper[5119]: I0121 10:25:41.118308 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-jvgdj"] Jan 21 10:25:42 crc kubenswrapper[5119]: I0121 10:25:42.598977 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8a0125b-76ab-4447-ab95-378759ee9e99" path="/var/lib/kubelet/pods/a8a0125b-76ab-4447-ab95-378759ee9e99/volumes" Jan 21 10:25:44 crc kubenswrapper[5119]: I0121 10:25:44.266901 5119 log.go:25] "Finished parsing 
log file" path="/var/log/pods/cert-manager_cert-manager-858d87f86b-87w9p_e419bfea-ad6b-452c-a894-952a01ea8429/cert-manager-controller/0.log" Jan 21 10:25:44 crc kubenswrapper[5119]: I0121 10:25:44.281572 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7dbf76d5c8-vv5mq_d9416295-81e6-488c-b079-97d7ba7c4f3e/cert-manager-cainjector/0.log" Jan 21 10:25:44 crc kubenswrapper[5119]: I0121 10:25:44.297143 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-7894b5b9b4-ps4d8_9ab82c58-d623-4b22-aae4-4f8c744cb42d/cert-manager-webhook/0.log" Jan 21 10:25:48 crc kubenswrapper[5119]: I0121 10:25:48.893988 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-tbcj4_7c6b806a-236c-47d9-bd21-32fcaee5b1ec/prometheus-operator/0.log" Jan 21 10:25:48 crc kubenswrapper[5119]: I0121 10:25:48.904846 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-69b968d655-7rzqs_14cb3c42-9784-4142-a581-86863911936b/prometheus-operator-admission-webhook/0.log" Jan 21 10:25:48 crc kubenswrapper[5119]: I0121 10:25:48.917142 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-69b968d655-n75gs_ee8142e1-cc8a-44c9-b122-940344748596/prometheus-operator-admission-webhook/0.log" Jan 21 10:25:48 crc kubenswrapper[5119]: I0121 10:25:48.943054 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-gdq46_feb935e3-7103-46f7-ab84-2ba969146f6f/operator/0.log" Jan 21 10:25:48 crc kubenswrapper[5119]: I0121 10:25:48.954357 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-6kpd9_e0115c16-2bbf-4c9a-8731-b4f799070b87/perses-operator/0.log" Jan 21 10:25:49 crc kubenswrapper[5119]: I0121 
10:25:49.591384 5119 scope.go:117] "RemoveContainer" containerID="aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75" Jan 21 10:25:49 crc kubenswrapper[5119]: E0121 10:25:49.592951 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:25:53 crc kubenswrapper[5119]: I0121 10:25:53.771494 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7_96725402-3741-49bb-a915-6e04fde9ee9d/extract/0.log" Jan 21 10:25:53 crc kubenswrapper[5119]: I0121 10:25:53.778543 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7_96725402-3741-49bb-a915-6e04fde9ee9d/util/0.log" Jan 21 10:25:53 crc kubenswrapper[5119]: I0121 10:25:53.810506 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54qj7_96725402-3741-49bb-a915-6e04fde9ee9d/pull/0.log" Jan 21 10:25:53 crc kubenswrapper[5119]: I0121 10:25:53.821905 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb_8e328aac-fdc5-4809-9e67-d4e3cbe46404/extract/0.log" Jan 21 10:25:53 crc kubenswrapper[5119]: I0121 10:25:53.832495 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb_8e328aac-fdc5-4809-9e67-d4e3cbe46404/util/0.log" Jan 21 10:25:53 crc kubenswrapper[5119]: I0121 
10:25:53.842244 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2mtb_8e328aac-fdc5-4809-9e67-d4e3cbe46404/pull/0.log" Jan 21 10:25:53 crc kubenswrapper[5119]: I0121 10:25:53.856707 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx_73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4/extract/0.log" Jan 21 10:25:53 crc kubenswrapper[5119]: I0121 10:25:53.864479 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx_73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4/util/0.log" Jan 21 10:25:53 crc kubenswrapper[5119]: I0121 10:25:53.873966 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edq7tx_73ffe3a4-c2e9-40b2-b57c-01584b3d1ec4/pull/0.log" Jan 21 10:25:53 crc kubenswrapper[5119]: I0121 10:25:53.885174 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh_ffb326e8-8174-4779-9192-7321b0edcb79/extract/0.log" Jan 21 10:25:53 crc kubenswrapper[5119]: I0121 10:25:53.894454 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh_ffb326e8-8174-4779-9192-7321b0edcb79/util/0.log" Jan 21 10:25:53 crc kubenswrapper[5119]: I0121 10:25:53.905565 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dqcqh_ffb326e8-8174-4779-9192-7321b0edcb79/pull/0.log" Jan 21 10:25:54 crc kubenswrapper[5119]: I0121 10:25:54.134340 5119 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-8dcd6_8e00889e-5f62-4c41-971c-f9ef4ed0d77e/registry-server/0.log" Jan 21 10:25:54 crc kubenswrapper[5119]: I0121 10:25:54.140752 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8dcd6_8e00889e-5f62-4c41-971c-f9ef4ed0d77e/extract-utilities/0.log" Jan 21 10:25:54 crc kubenswrapper[5119]: I0121 10:25:54.147074 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8dcd6_8e00889e-5f62-4c41-971c-f9ef4ed0d77e/extract-content/0.log" Jan 21 10:25:54 crc kubenswrapper[5119]: I0121 10:25:54.630521 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqs4l_1563cf60-a66c-484e-bc5d-6dd7571d55a6/registry-server/0.log" Jan 21 10:25:54 crc kubenswrapper[5119]: I0121 10:25:54.636677 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqs4l_1563cf60-a66c-484e-bc5d-6dd7571d55a6/extract-utilities/0.log" Jan 21 10:25:54 crc kubenswrapper[5119]: I0121 10:25:54.643930 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqs4l_1563cf60-a66c-484e-bc5d-6dd7571d55a6/extract-content/0.log" Jan 21 10:25:54 crc kubenswrapper[5119]: I0121 10:25:54.658696 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-k595m_b28083dd-7140-4978-9f2e-492904f94465/marketplace-operator/0.log" Jan 21 10:25:54 crc kubenswrapper[5119]: I0121 10:25:54.991706 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pc5lh_eb15b3ad-46ff-46ed-bd2b-b15ac9c4a321/registry-server/0.log" Jan 21 10:25:54 crc kubenswrapper[5119]: I0121 10:25:54.996291 5119 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-pc5lh_eb15b3ad-46ff-46ed-bd2b-b15ac9c4a321/extract-utilities/0.log" Jan 21 10:25:55 crc kubenswrapper[5119]: I0121 10:25:55.002766 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pc5lh_eb15b3ad-46ff-46ed-bd2b-b15ac9c4a321/extract-content/0.log" Jan 21 10:25:58 crc kubenswrapper[5119]: I0121 10:25:58.380801 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-tbcj4_7c6b806a-236c-47d9-bd21-32fcaee5b1ec/prometheus-operator/0.log" Jan 21 10:25:58 crc kubenswrapper[5119]: I0121 10:25:58.393123 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-69b968d655-7rzqs_14cb3c42-9784-4142-a581-86863911936b/prometheus-operator-admission-webhook/0.log" Jan 21 10:25:58 crc kubenswrapper[5119]: I0121 10:25:58.405623 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-69b968d655-n75gs_ee8142e1-cc8a-44c9-b122-940344748596/prometheus-operator-admission-webhook/0.log" Jan 21 10:25:58 crc kubenswrapper[5119]: I0121 10:25:58.419862 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-gdq46_feb935e3-7103-46f7-ab84-2ba969146f6f/operator/0.log" Jan 21 10:25:58 crc kubenswrapper[5119]: I0121 10:25:58.433027 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-6kpd9_e0115c16-2bbf-4c9a-8731-b4f799070b87/perses-operator/0.log" Jan 21 10:26:00 crc kubenswrapper[5119]: I0121 10:26:00.131635 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483186-7wqzw"] Jan 21 10:26:00 crc kubenswrapper[5119]: I0121 10:26:00.132324 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="a8a0125b-76ab-4447-ab95-378759ee9e99" containerName="registry-server" Jan 21 10:26:00 crc kubenswrapper[5119]: I0121 10:26:00.132336 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8a0125b-76ab-4447-ab95-378759ee9e99" containerName="registry-server" Jan 21 10:26:00 crc kubenswrapper[5119]: I0121 10:26:00.132530 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="a8a0125b-76ab-4447-ab95-378759ee9e99" containerName="registry-server" Jan 21 10:26:00 crc kubenswrapper[5119]: I0121 10:26:00.166292 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483186-7wqzw"] Jan 21 10:26:00 crc kubenswrapper[5119]: I0121 10:26:00.166446 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483186-7wqzw" Jan 21 10:26:00 crc kubenswrapper[5119]: I0121 10:26:00.169129 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\"" Jan 21 10:26:00 crc kubenswrapper[5119]: I0121 10:26:00.169328 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:26:00 crc kubenswrapper[5119]: I0121 10:26:00.169521 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:26:00 crc kubenswrapper[5119]: I0121 10:26:00.260487 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmpfr\" (UniqueName: \"kubernetes.io/projected/c8b6af3d-b6ee-4fc0-94f2-17b1121b15af-kube-api-access-pmpfr\") pod \"auto-csr-approver-29483186-7wqzw\" (UID: \"c8b6af3d-b6ee-4fc0-94f2-17b1121b15af\") " pod="openshift-infra/auto-csr-approver-29483186-7wqzw" Jan 21 10:26:00 crc kubenswrapper[5119]: I0121 10:26:00.361507 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pmpfr\" (UniqueName: \"kubernetes.io/projected/c8b6af3d-b6ee-4fc0-94f2-17b1121b15af-kube-api-access-pmpfr\") pod \"auto-csr-approver-29483186-7wqzw\" (UID: \"c8b6af3d-b6ee-4fc0-94f2-17b1121b15af\") " pod="openshift-infra/auto-csr-approver-29483186-7wqzw" Jan 21 10:26:00 crc kubenswrapper[5119]: I0121 10:26:00.383250 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmpfr\" (UniqueName: \"kubernetes.io/projected/c8b6af3d-b6ee-4fc0-94f2-17b1121b15af-kube-api-access-pmpfr\") pod \"auto-csr-approver-29483186-7wqzw\" (UID: \"c8b6af3d-b6ee-4fc0-94f2-17b1121b15af\") " pod="openshift-infra/auto-csr-approver-29483186-7wqzw" Jan 21 10:26:00 crc kubenswrapper[5119]: I0121 10:26:00.483139 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483186-7wqzw" Jan 21 10:26:00 crc kubenswrapper[5119]: I0121 10:26:00.591353 5119 scope.go:117] "RemoveContainer" containerID="aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75" Jan 21 10:26:00 crc kubenswrapper[5119]: E0121 10:26:00.591688 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:26:00 crc kubenswrapper[5119]: I0121 10:26:00.881502 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483186-7wqzw"] Jan 21 10:26:01 crc kubenswrapper[5119]: I0121 10:26:01.229880 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483186-7wqzw" 
event={"ID":"c8b6af3d-b6ee-4fc0-94f2-17b1121b15af","Type":"ContainerStarted","Data":"6a968db1b1cd9d3edf69fddb231ec4af47001cf4e87616b098fdbb79e26a1b20"} Jan 21 10:26:02 crc kubenswrapper[5119]: I0121 10:26:02.239891 5119 generic.go:358] "Generic (PLEG): container finished" podID="c8b6af3d-b6ee-4fc0-94f2-17b1121b15af" containerID="1e0556cfab979244bf8df2f376a0e756047997eb36855b57a357e7c64d492889" exitCode=0 Jan 21 10:26:02 crc kubenswrapper[5119]: I0121 10:26:02.240036 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483186-7wqzw" event={"ID":"c8b6af3d-b6ee-4fc0-94f2-17b1121b15af","Type":"ContainerDied","Data":"1e0556cfab979244bf8df2f376a0e756047997eb36855b57a357e7c64d492889"} Jan 21 10:26:03 crc kubenswrapper[5119]: I0121 10:26:03.512358 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483186-7wqzw" Jan 21 10:26:03 crc kubenswrapper[5119]: I0121 10:26:03.607460 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmpfr\" (UniqueName: \"kubernetes.io/projected/c8b6af3d-b6ee-4fc0-94f2-17b1121b15af-kube-api-access-pmpfr\") pod \"c8b6af3d-b6ee-4fc0-94f2-17b1121b15af\" (UID: \"c8b6af3d-b6ee-4fc0-94f2-17b1121b15af\") " Jan 21 10:26:03 crc kubenswrapper[5119]: I0121 10:26:03.613228 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8b6af3d-b6ee-4fc0-94f2-17b1121b15af-kube-api-access-pmpfr" (OuterVolumeSpecName: "kube-api-access-pmpfr") pod "c8b6af3d-b6ee-4fc0-94f2-17b1121b15af" (UID: "c8b6af3d-b6ee-4fc0-94f2-17b1121b15af"). InnerVolumeSpecName "kube-api-access-pmpfr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:26:03 crc kubenswrapper[5119]: I0121 10:26:03.708848 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pmpfr\" (UniqueName: \"kubernetes.io/projected/c8b6af3d-b6ee-4fc0-94f2-17b1121b15af-kube-api-access-pmpfr\") on node \"crc\" DevicePath \"\"" Jan 21 10:26:04 crc kubenswrapper[5119]: I0121 10:26:04.256831 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483186-7wqzw" event={"ID":"c8b6af3d-b6ee-4fc0-94f2-17b1121b15af","Type":"ContainerDied","Data":"6a968db1b1cd9d3edf69fddb231ec4af47001cf4e87616b098fdbb79e26a1b20"} Jan 21 10:26:04 crc kubenswrapper[5119]: I0121 10:26:04.256875 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a968db1b1cd9d3edf69fddb231ec4af47001cf4e87616b098fdbb79e26a1b20" Jan 21 10:26:04 crc kubenswrapper[5119]: I0121 10:26:04.256938 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483186-7wqzw" Jan 21 10:26:04 crc kubenswrapper[5119]: I0121 10:26:04.567443 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483180-ctgp7"] Jan 21 10:26:04 crc kubenswrapper[5119]: I0121 10:26:04.572560 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483180-ctgp7"] Jan 21 10:26:04 crc kubenswrapper[5119]: I0121 10:26:04.599981 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47bd5ea6-6b18-4142-ba75-c66720a8059e" path="/var/lib/kubelet/pods/47bd5ea6-6b18-4142-ba75-c66720a8059e/volumes" Jan 21 10:26:06 crc kubenswrapper[5119]: I0121 10:26:06.547141 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-tbcj4_7c6b806a-236c-47d9-bd21-32fcaee5b1ec/prometheus-operator/0.log" Jan 21 10:26:06 crc kubenswrapper[5119]: I0121 10:26:06.557526 5119 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-69b968d655-7rzqs_14cb3c42-9784-4142-a581-86863911936b/prometheus-operator-admission-webhook/0.log" Jan 21 10:26:06 crc kubenswrapper[5119]: I0121 10:26:06.568822 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-69b968d655-n75gs_ee8142e1-cc8a-44c9-b122-940344748596/prometheus-operator-admission-webhook/0.log" Jan 21 10:26:06 crc kubenswrapper[5119]: I0121 10:26:06.617466 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-gdq46_feb935e3-7103-46f7-ab84-2ba969146f6f/operator/0.log" Jan 21 10:26:06 crc kubenswrapper[5119]: I0121 10:26:06.638344 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-6kpd9_e0115c16-2bbf-4c9a-8731-b4f799070b87/perses-operator/0.log" Jan 21 10:26:06 crc kubenswrapper[5119]: I0121 10:26:06.721521 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858d87f86b-87w9p_e419bfea-ad6b-452c-a894-952a01ea8429/cert-manager-controller/0.log" Jan 21 10:26:06 crc kubenswrapper[5119]: I0121 10:26:06.732105 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7dbf76d5c8-vv5mq_d9416295-81e6-488c-b079-97d7ba7c4f3e/cert-manager-cainjector/0.log" Jan 21 10:26:06 crc kubenswrapper[5119]: I0121 10:26:06.747667 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-7894b5b9b4-ps4d8_9ab82c58-d623-4b22-aae4-4f8c744cb42d/cert-manager-webhook/0.log" Jan 21 10:26:07 crc kubenswrapper[5119]: I0121 10:26:07.251553 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858d87f86b-87w9p_e419bfea-ad6b-452c-a894-952a01ea8429/cert-manager-controller/0.log" Jan 21 10:26:07 crc kubenswrapper[5119]: 
I0121 10:26:07.262001 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7dbf76d5c8-vv5mq_d9416295-81e6-488c-b079-97d7ba7c4f3e/cert-manager-cainjector/0.log" Jan 21 10:26:07 crc kubenswrapper[5119]: I0121 10:26:07.271223 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-7894b5b9b4-ps4d8_9ab82c58-d623-4b22-aae4-4f8c744cb42d/cert-manager-webhook/0.log" Jan 21 10:26:07 crc kubenswrapper[5119]: I0121 10:26:07.689399 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-dc2k6_cec067a0-6e27-4e3f-b03a-f37ffd10dd43/control-plane-machine-set-operator/0.log" Jan 21 10:26:07 crc kubenswrapper[5119]: I0121 10:26:07.706874 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-6kqm2_1ccf6a04-2820-4b99-9dbd-2e6d111b4fed/kube-rbac-proxy/0.log" Jan 21 10:26:07 crc kubenswrapper[5119]: I0121 10:26:07.716363 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-6kqm2_1ccf6a04-2820-4b99-9dbd-2e6d111b4fed/machine-api-operator/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.239044 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb_8549968f-d5b0-4ce5-beec-50d16fc6cf3e/extract/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.246562 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb_8549968f-d5b0-4ce5-beec-50d16fc6cf3e/util/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.254544 5119 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09zbwsb_8549968f-d5b0-4ce5-beec-50d16fc6cf3e/pull/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.268742 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7_c3c48b5c-d02d-406d-8893-4b4e73df93b5/extract/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.276847 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7_c3c48b5c-d02d-406d-8893-4b4e73df93b5/util/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.285322 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65ad95t7_c3c48b5c-d02d-406d-8893-4b4e73df93b5/pull/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.296999 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_d0d6eb61-b1b6-4df6-a282-2f98000680b0/alertmanager/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.304173 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_d0d6eb61-b1b6-4df6-a282-2f98000680b0/config-reloader/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.310299 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_d0d6eb61-b1b6-4df6-a282-2f98000680b0/oauth-proxy/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.316230 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_d0d6eb61-b1b6-4df6-a282-2f98000680b0/init-config-reloader/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.335053 5119 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_curl_9656acf6-a085-424e-9bed-bfc45b74afc5/curl/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.345907 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z_89079dd8-483b-44ef-81be-6ab712709669/bridge/2.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.346262 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z_89079dd8-483b-44ef-81be-6ab712709669/bridge/1.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.350477 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7f57f9f758-p6p4z_89079dd8-483b-44ef-81be-6ab712709669/sg-core/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.363867 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t_e23122ba-6ad2-407e-aaeb-7c8f6e27ab54/oauth-proxy/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.371043 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t_e23122ba-6ad2-407e-aaeb-7c8f6e27ab54/bridge/2.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.371238 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t_e23122ba-6ad2-407e-aaeb-7c8f6e27ab54/bridge/1.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.375240 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-zmv9t_e23122ba-6ad2-407e-aaeb-7c8f6e27ab54/sg-core/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.383896 5119 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm_5ab338f9-d819-4ea4-9298-e9b521d0d494/bridge/2.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.384052 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm_5ab338f9-d819-4ea4-9298-e9b521d0d494/bridge/1.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.387716 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-687f7fdd45-vpqsm_5ab338f9-d819-4ea4-9298-e9b521d0d494/sg-core/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.399198 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k_f2986212-b53b-4df9-9cd5-884f35c89cba/oauth-proxy/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.406482 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k_f2986212-b53b-4df9-9cd5-884f35c89cba/bridge/2.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.406823 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k_f2986212-b53b-4df9-9cd5-884f35c89cba/bridge/1.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.410694 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-brq7k_f2986212-b53b-4df9-9cd5-884f35c89cba/sg-core/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.420550 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv_2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7/oauth-proxy/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.425394 5119 log.go:25] 
"Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv_2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7/bridge/2.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.425644 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv_2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7/bridge/1.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.429511 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-68pxv_2abb3e5e-d305-4d1b-aa39-8b8938ee7ca7/sg-core/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.450652 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-55bf8d5cb-hzlwg_0661936f-76da-4b08-818a-352bba8bad5c/default-interconnect/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.461317 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-hnfrz_ec93941b-ddbb-42d6-ae36-3c643b48a65b/prometheus-webhook-snmp/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.489404 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elastic-operator-7978d4ccbd-fg5zr_4fcd5f26-6c5c-40b8-9c33-ff3679b1c09f/manager/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.508694 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_f12216d5-43c7-4e0c-be7a-74aa76900a78/elasticsearch/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.515676 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_f12216d5-43c7-4e0c-be7a-74aa76900a78/elastic-internal-init-filesystem/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.518890 5119 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_f12216d5-43c7-4e0c-be7a-74aa76900a78/elastic-internal-suspend/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.528248 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_infrawatch-operators-9skl8_9a8f73da-3e22-403a-914c-cada7a1ef592/registry-server/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.538022 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_interconnect-operator-78b9bd8798-62b8q_28455f19-bea4-4979-bc01-e9ca6f14c7e6/interconnect-operator/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.549887 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_481848f8-834a-47be-9301-1153fcbc51ef/prometheus/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.558045 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_481848f8-834a-47be-9301-1153fcbc51ef/config-reloader/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.564207 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_481848f8-834a-47be-9301-1153fcbc51ef/oauth-proxy/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.571516 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_481848f8-834a-47be-9301-1153fcbc51ef/init-config-reloader/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.622724 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7/docker-build/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.627395 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7/git-clone/0.log" Jan 21 10:26:08 
crc kubenswrapper[5119]: I0121 10:26:08.634104 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_dfd2bcdc-4323-42c7-b39f-0ccabbce6dc7/manage-dockerfile/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.647379 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_37e04609-ba47-4c81-bd8f-40f2342d42d5/qdr/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.661448 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_9001b215-32b9-49f3-bb75-fd770950053e/docker-build/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.666052 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_9001b215-32b9-49f3-bb75-fd770950053e/git-clone/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.674770 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_9001b215-32b9-49f3-bb75-fd770950053e/manage-dockerfile/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.734773 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_568d37ef-0166-4215-b0c1-ed9c9db7a3a1/docker-build/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.741497 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_568d37ef-0166-4215-b0c1-ed9c9db7a3a1/git-clone/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.748909 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_568d37ef-0166-4215-b0c1-ed9c9db7a3a1/manage-dockerfile/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.939954 5119 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-operator-84d7cb46fc-c4c9r_7ede4c81-63e3-44ba-8e96-9dcb8c34adce/operator/0.log" Jan 21 10:26:08 crc kubenswrapper[5119]: I0121 10:26:08.953418 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-2-build_1154d85d-dc29-49ea-9f9d-e5264f980b9c/docker-build/0.log" Jan 21 10:26:09 crc kubenswrapper[5119]: I0121 10:26:09.002054 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-2-build_1154d85d-dc29-49ea-9f9d-e5264f980b9c/git-clone/0.log" Jan 21 10:26:09 crc kubenswrapper[5119]: I0121 10:26:09.010281 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-2-build_1154d85d-dc29-49ea-9f9d-e5264f980b9c/manage-dockerfile/0.log" Jan 21 10:26:09 crc kubenswrapper[5119]: I0121 10:26:09.049685 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_b392096e-f869-42e4-b405-995e0adf0568/docker-build/0.log" Jan 21 10:26:09 crc kubenswrapper[5119]: I0121 10:26:09.054709 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_b392096e-f869-42e4-b405-995e0adf0568/git-clone/0.log" Jan 21 10:26:09 crc kubenswrapper[5119]: I0121 10:26:09.060003 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_b392096e-f869-42e4-b405-995e0adf0568/manage-dockerfile/0.log" Jan 21 10:26:09 crc kubenswrapper[5119]: I0121 10:26:09.102942 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_d31f303f-6cf4-4177-904a-97d7409af8e3/docker-build/0.log" Jan 21 10:26:09 crc kubenswrapper[5119]: I0121 10:26:09.107205 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_d31f303f-6cf4-4177-904a-97d7409af8e3/git-clone/0.log" Jan 21 10:26:09 crc 
kubenswrapper[5119]: I0121 10:26:09.114734 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_d31f303f-6cf4-4177-904a-97d7409af8e3/manage-dockerfile/0.log" Jan 21 10:26:09 crc kubenswrapper[5119]: I0121 10:26:09.168473 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_917a0b38-23e4-466d-8e05-434245795a3e/docker-build/0.log" Jan 21 10:26:09 crc kubenswrapper[5119]: I0121 10:26:09.173536 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_917a0b38-23e4-466d-8e05-434245795a3e/git-clone/0.log" Jan 21 10:26:09 crc kubenswrapper[5119]: I0121 10:26:09.179756 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_917a0b38-23e4-466d-8e05-434245795a3e/manage-dockerfile/0.log" Jan 21 10:26:12 crc kubenswrapper[5119]: I0121 10:26:12.303095 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-57588ddc85-czgbb_c7cc2173-29c2-4a8e-ab2b-7e373d79c484/operator/0.log" Jan 21 10:26:12 crc kubenswrapper[5119]: I0121 10:26:12.326732 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-2-build_bbb62a87-888f-486e-9087-557a47d4754c/docker-build/0.log" Jan 21 10:26:12 crc kubenswrapper[5119]: I0121 10:26:12.331812 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-2-build_bbb62a87-888f-486e-9087-557a47d4754c/git-clone/0.log" Jan 21 10:26:12 crc kubenswrapper[5119]: I0121 10:26:12.344740 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-2-build_bbb62a87-888f-486e-9087-557a47d4754c/manage-dockerfile/0.log" Jan 21 10:26:12 crc kubenswrapper[5119]: I0121 10:26:12.372222 5119 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-srjmg_29b45a36-1894-437c-aa94-b91f1008f40f/smoketest-collectd/0.log" Jan 21 10:26:12 crc kubenswrapper[5119]: I0121 10:26:12.378577 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-srjmg_29b45a36-1894-437c-aa94-b91f1008f40f/smoketest-ceilometer/0.log" Jan 21 10:26:13 crc kubenswrapper[5119]: I0121 10:26:13.591006 5119 scope.go:117] "RemoveContainer" containerID="aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75" Jan 21 10:26:13 crc kubenswrapper[5119]: E0121 10:26:13.591414 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:26:13 crc kubenswrapper[5119]: I0121 10:26:13.786812 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log" Jan 21 10:26:13 crc kubenswrapper[5119]: I0121 10:26:13.789870 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/1.log" Jan 21 10:26:13 crc kubenswrapper[5119]: I0121 10:26:13.802810 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lpnb6_20b7f175-32b1-486b-b6c0-8c12a6ad8338/kube-multus-additional-cni-plugins/0.log" Jan 21 10:26:13 crc kubenswrapper[5119]: I0121 10:26:13.812984 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lpnb6_20b7f175-32b1-486b-b6c0-8c12a6ad8338/egress-router-binary-copy/0.log" Jan 21 10:26:13 crc 
kubenswrapper[5119]: I0121 10:26:13.822387 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lpnb6_20b7f175-32b1-486b-b6c0-8c12a6ad8338/cni-plugins/0.log" Jan 21 10:26:13 crc kubenswrapper[5119]: I0121 10:26:13.829008 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lpnb6_20b7f175-32b1-486b-b6c0-8c12a6ad8338/bond-cni-plugin/0.log" Jan 21 10:26:13 crc kubenswrapper[5119]: I0121 10:26:13.834787 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lpnb6_20b7f175-32b1-486b-b6c0-8c12a6ad8338/routeoverride-cni/0.log" Jan 21 10:26:13 crc kubenswrapper[5119]: I0121 10:26:13.840374 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lpnb6_20b7f175-32b1-486b-b6c0-8c12a6ad8338/whereabouts-cni-bincopy/0.log" Jan 21 10:26:13 crc kubenswrapper[5119]: I0121 10:26:13.855656 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lpnb6_20b7f175-32b1-486b-b6c0-8c12a6ad8338/whereabouts-cni/0.log" Jan 21 10:26:13 crc kubenswrapper[5119]: I0121 10:26:13.864387 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-69db94689b-9jrsz_1b25e062-a07b-4350-84c9-9247d3a0c144/multus-admission-controller/0.log" Jan 21 10:26:13 crc kubenswrapper[5119]: I0121 10:26:13.871638 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-69db94689b-9jrsz_1b25e062-a07b-4350-84c9-9247d3a0c144/kube-rbac-proxy/0.log" Jan 21 10:26:13 crc kubenswrapper[5119]: I0121 10:26:13.892226 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-fk2f6_0e481d9e-6dd0-4c5e-bb9a-33546cb7715d/network-metrics-daemon/0.log" Jan 21 10:26:13 crc kubenswrapper[5119]: I0121 10:26:13.896465 
5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-fk2f6_0e481d9e-6dd0-4c5e-bb9a-33546cb7715d/kube-rbac-proxy/0.log" Jan 21 10:26:28 crc kubenswrapper[5119]: I0121 10:26:28.591317 5119 scope.go:117] "RemoveContainer" containerID="aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75" Jan 21 10:26:28 crc kubenswrapper[5119]: E0121 10:26:28.592215 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:26:42 crc kubenswrapper[5119]: I0121 10:26:42.594317 5119 scope.go:117] "RemoveContainer" containerID="aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75" Jan 21 10:26:42 crc kubenswrapper[5119]: E0121 10:26:42.595353 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:26:48 crc kubenswrapper[5119]: I0121 10:26:48.891814 5119 scope.go:117] "RemoveContainer" containerID="796ccbbdf71ee62d13365dcd21740c2ad758ddca4f7e6aa802a0b075452278ab" Jan 21 10:26:55 crc kubenswrapper[5119]: I0121 10:26:55.591374 5119 scope.go:117] "RemoveContainer" containerID="aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75" Jan 21 10:26:55 crc kubenswrapper[5119]: E0121 10:26:55.592477 5119 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:27:07 crc kubenswrapper[5119]: I0121 10:27:07.591388 5119 scope.go:117] "RemoveContainer" containerID="aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75" Jan 21 10:27:07 crc kubenswrapper[5119]: E0121 10:27:07.593394 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:27:20 crc kubenswrapper[5119]: I0121 10:27:20.592387 5119 scope.go:117] "RemoveContainer" containerID="aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75" Jan 21 10:27:20 crc kubenswrapper[5119]: E0121 10:27:20.593882 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:27:34 crc kubenswrapper[5119]: I0121 10:27:34.863351 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gw22s"] Jan 21 10:27:34 crc kubenswrapper[5119]: I0121 10:27:34.877454 5119 cpu_manager.go:401] "RemoveStaleState: 
containerMap: removing container" podUID="c8b6af3d-b6ee-4fc0-94f2-17b1121b15af" containerName="oc" Jan 21 10:27:34 crc kubenswrapper[5119]: I0121 10:27:34.877474 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8b6af3d-b6ee-4fc0-94f2-17b1121b15af" containerName="oc" Jan 21 10:27:34 crc kubenswrapper[5119]: I0121 10:27:34.877670 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="c8b6af3d-b6ee-4fc0-94f2-17b1121b15af" containerName="oc" Jan 21 10:27:34 crc kubenswrapper[5119]: I0121 10:27:34.891906 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gw22s"] Jan 21 10:27:34 crc kubenswrapper[5119]: I0121 10:27:34.892057 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gw22s" Jan 21 10:27:35 crc kubenswrapper[5119]: I0121 10:27:35.004218 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvfhr\" (UniqueName: \"kubernetes.io/projected/beb05a13-d340-4ab5-a754-328c14f28d21-kube-api-access-pvfhr\") pod \"certified-operators-gw22s\" (UID: \"beb05a13-d340-4ab5-a754-328c14f28d21\") " pod="openshift-marketplace/certified-operators-gw22s" Jan 21 10:27:35 crc kubenswrapper[5119]: I0121 10:27:35.004274 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beb05a13-d340-4ab5-a754-328c14f28d21-utilities\") pod \"certified-operators-gw22s\" (UID: \"beb05a13-d340-4ab5-a754-328c14f28d21\") " pod="openshift-marketplace/certified-operators-gw22s" Jan 21 10:27:35 crc kubenswrapper[5119]: I0121 10:27:35.004295 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beb05a13-d340-4ab5-a754-328c14f28d21-catalog-content\") pod \"certified-operators-gw22s\" (UID: 
\"beb05a13-d340-4ab5-a754-328c14f28d21\") " pod="openshift-marketplace/certified-operators-gw22s" Jan 21 10:27:35 crc kubenswrapper[5119]: I0121 10:27:35.106059 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pvfhr\" (UniqueName: \"kubernetes.io/projected/beb05a13-d340-4ab5-a754-328c14f28d21-kube-api-access-pvfhr\") pod \"certified-operators-gw22s\" (UID: \"beb05a13-d340-4ab5-a754-328c14f28d21\") " pod="openshift-marketplace/certified-operators-gw22s" Jan 21 10:27:35 crc kubenswrapper[5119]: I0121 10:27:35.106118 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beb05a13-d340-4ab5-a754-328c14f28d21-utilities\") pod \"certified-operators-gw22s\" (UID: \"beb05a13-d340-4ab5-a754-328c14f28d21\") " pod="openshift-marketplace/certified-operators-gw22s" Jan 21 10:27:35 crc kubenswrapper[5119]: I0121 10:27:35.106147 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beb05a13-d340-4ab5-a754-328c14f28d21-catalog-content\") pod \"certified-operators-gw22s\" (UID: \"beb05a13-d340-4ab5-a754-328c14f28d21\") " pod="openshift-marketplace/certified-operators-gw22s" Jan 21 10:27:35 crc kubenswrapper[5119]: I0121 10:27:35.106507 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beb05a13-d340-4ab5-a754-328c14f28d21-utilities\") pod \"certified-operators-gw22s\" (UID: \"beb05a13-d340-4ab5-a754-328c14f28d21\") " pod="openshift-marketplace/certified-operators-gw22s" Jan 21 10:27:35 crc kubenswrapper[5119]: I0121 10:27:35.106580 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beb05a13-d340-4ab5-a754-328c14f28d21-catalog-content\") pod \"certified-operators-gw22s\" (UID: \"beb05a13-d340-4ab5-a754-328c14f28d21\") 
" pod="openshift-marketplace/certified-operators-gw22s" Jan 21 10:27:35 crc kubenswrapper[5119]: I0121 10:27:35.140709 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvfhr\" (UniqueName: \"kubernetes.io/projected/beb05a13-d340-4ab5-a754-328c14f28d21-kube-api-access-pvfhr\") pod \"certified-operators-gw22s\" (UID: \"beb05a13-d340-4ab5-a754-328c14f28d21\") " pod="openshift-marketplace/certified-operators-gw22s" Jan 21 10:27:35 crc kubenswrapper[5119]: I0121 10:27:35.206707 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gw22s" Jan 21 10:27:35 crc kubenswrapper[5119]: I0121 10:27:35.590787 5119 scope.go:117] "RemoveContainer" containerID="aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75" Jan 21 10:27:35 crc kubenswrapper[5119]: E0121 10:27:35.591399 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:27:35 crc kubenswrapper[5119]: I0121 10:27:35.700050 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gw22s"] Jan 21 10:27:36 crc kubenswrapper[5119]: I0121 10:27:36.058998 5119 generic.go:358] "Generic (PLEG): container finished" podID="beb05a13-d340-4ab5-a754-328c14f28d21" containerID="f46717e7dd88d0d7f1025355151f823a32ac7677ad404e70d431265f50655bfd" exitCode=0 Jan 21 10:27:36 crc kubenswrapper[5119]: I0121 10:27:36.059070 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw22s" 
event={"ID":"beb05a13-d340-4ab5-a754-328c14f28d21","Type":"ContainerDied","Data":"f46717e7dd88d0d7f1025355151f823a32ac7677ad404e70d431265f50655bfd"} Jan 21 10:27:36 crc kubenswrapper[5119]: I0121 10:27:36.060399 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw22s" event={"ID":"beb05a13-d340-4ab5-a754-328c14f28d21","Type":"ContainerStarted","Data":"ad9463a88679b9eba2b9b8eb2dd967f57d5175fe4a4c3f7c3cddfb7798e059d8"} Jan 21 10:27:37 crc kubenswrapper[5119]: I0121 10:27:37.075158 5119 generic.go:358] "Generic (PLEG): container finished" podID="beb05a13-d340-4ab5-a754-328c14f28d21" containerID="72911fab591f8987e7b85fece911a3cb08b703d749cd3695635db5931750365f" exitCode=0 Jan 21 10:27:37 crc kubenswrapper[5119]: I0121 10:27:37.075268 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw22s" event={"ID":"beb05a13-d340-4ab5-a754-328c14f28d21","Type":"ContainerDied","Data":"72911fab591f8987e7b85fece911a3cb08b703d749cd3695635db5931750365f"} Jan 21 10:27:38 crc kubenswrapper[5119]: I0121 10:27:38.085207 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw22s" event={"ID":"beb05a13-d340-4ab5-a754-328c14f28d21","Type":"ContainerStarted","Data":"9173ddfe385ed90e1560115587c084319677cd31fe6b58f49b50a3f13999a6a8"} Jan 21 10:27:38 crc kubenswrapper[5119]: I0121 10:27:38.108234 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gw22s" podStartSLOduration=3.514155632 podStartE2EDuration="4.10821444s" podCreationTimestamp="2026-01-21 10:27:34 +0000 UTC" firstStartedPulling="2026-01-21 10:27:36.060345961 +0000 UTC m=+1971.728437649" lastFinishedPulling="2026-01-21 10:27:36.654404739 +0000 UTC m=+1972.322496457" observedRunningTime="2026-01-21 10:27:38.104673903 +0000 UTC m=+1973.772765581" watchObservedRunningTime="2026-01-21 10:27:38.10821444 +0000 UTC 
m=+1973.776306118" Jan 21 10:27:45 crc kubenswrapper[5119]: I0121 10:27:45.207829 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gw22s" Jan 21 10:27:45 crc kubenswrapper[5119]: I0121 10:27:45.208412 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-gw22s" Jan 21 10:27:45 crc kubenswrapper[5119]: I0121 10:27:45.246298 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gw22s" Jan 21 10:27:46 crc kubenswrapper[5119]: I0121 10:27:46.214093 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gw22s" Jan 21 10:27:46 crc kubenswrapper[5119]: I0121 10:27:46.259748 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gw22s"] Jan 21 10:27:46 crc kubenswrapper[5119]: I0121 10:27:46.591234 5119 scope.go:117] "RemoveContainer" containerID="aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75" Jan 21 10:27:46 crc kubenswrapper[5119]: E0121 10:27:46.591698 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:27:48 crc kubenswrapper[5119]: I0121 10:27:48.197395 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gw22s" podUID="beb05a13-d340-4ab5-a754-328c14f28d21" containerName="registry-server" containerID="cri-o://9173ddfe385ed90e1560115587c084319677cd31fe6b58f49b50a3f13999a6a8" 
gracePeriod=2 Jan 21 10:27:49 crc kubenswrapper[5119]: I0121 10:27:49.071650 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gw22s" Jan 21 10:27:49 crc kubenswrapper[5119]: I0121 10:27:49.205205 5119 generic.go:358] "Generic (PLEG): container finished" podID="beb05a13-d340-4ab5-a754-328c14f28d21" containerID="9173ddfe385ed90e1560115587c084319677cd31fe6b58f49b50a3f13999a6a8" exitCode=0 Jan 21 10:27:49 crc kubenswrapper[5119]: I0121 10:27:49.205309 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gw22s" Jan 21 10:27:49 crc kubenswrapper[5119]: I0121 10:27:49.205349 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw22s" event={"ID":"beb05a13-d340-4ab5-a754-328c14f28d21","Type":"ContainerDied","Data":"9173ddfe385ed90e1560115587c084319677cd31fe6b58f49b50a3f13999a6a8"} Jan 21 10:27:49 crc kubenswrapper[5119]: I0121 10:27:49.205408 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw22s" event={"ID":"beb05a13-d340-4ab5-a754-328c14f28d21","Type":"ContainerDied","Data":"ad9463a88679b9eba2b9b8eb2dd967f57d5175fe4a4c3f7c3cddfb7798e059d8"} Jan 21 10:27:49 crc kubenswrapper[5119]: I0121 10:27:49.205431 5119 scope.go:117] "RemoveContainer" containerID="9173ddfe385ed90e1560115587c084319677cd31fe6b58f49b50a3f13999a6a8" Jan 21 10:27:49 crc kubenswrapper[5119]: I0121 10:27:49.221996 5119 scope.go:117] "RemoveContainer" containerID="72911fab591f8987e7b85fece911a3cb08b703d749cd3695635db5931750365f" Jan 21 10:27:49 crc kubenswrapper[5119]: I0121 10:27:49.230724 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvfhr\" (UniqueName: \"kubernetes.io/projected/beb05a13-d340-4ab5-a754-328c14f28d21-kube-api-access-pvfhr\") pod \"beb05a13-d340-4ab5-a754-328c14f28d21\" (UID: 
\"beb05a13-d340-4ab5-a754-328c14f28d21\") " Jan 21 10:27:49 crc kubenswrapper[5119]: I0121 10:27:49.230766 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beb05a13-d340-4ab5-a754-328c14f28d21-utilities\") pod \"beb05a13-d340-4ab5-a754-328c14f28d21\" (UID: \"beb05a13-d340-4ab5-a754-328c14f28d21\") " Jan 21 10:27:49 crc kubenswrapper[5119]: I0121 10:27:49.230883 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beb05a13-d340-4ab5-a754-328c14f28d21-catalog-content\") pod \"beb05a13-d340-4ab5-a754-328c14f28d21\" (UID: \"beb05a13-d340-4ab5-a754-328c14f28d21\") " Jan 21 10:27:49 crc kubenswrapper[5119]: I0121 10:27:49.232280 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/beb05a13-d340-4ab5-a754-328c14f28d21-utilities" (OuterVolumeSpecName: "utilities") pod "beb05a13-d340-4ab5-a754-328c14f28d21" (UID: "beb05a13-d340-4ab5-a754-328c14f28d21"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:27:49 crc kubenswrapper[5119]: I0121 10:27:49.241909 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beb05a13-d340-4ab5-a754-328c14f28d21-kube-api-access-pvfhr" (OuterVolumeSpecName: "kube-api-access-pvfhr") pod "beb05a13-d340-4ab5-a754-328c14f28d21" (UID: "beb05a13-d340-4ab5-a754-328c14f28d21"). InnerVolumeSpecName "kube-api-access-pvfhr". 
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:27:49 crc kubenswrapper[5119]: I0121 10:27:49.257520 5119 scope.go:117] "RemoveContainer" containerID="f46717e7dd88d0d7f1025355151f823a32ac7677ad404e70d431265f50655bfd"
Jan 21 10:27:49 crc kubenswrapper[5119]: I0121 10:27:49.268786 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/beb05a13-d340-4ab5-a754-328c14f28d21-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "beb05a13-d340-4ab5-a754-328c14f28d21" (UID: "beb05a13-d340-4ab5-a754-328c14f28d21"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:27:49 crc kubenswrapper[5119]: I0121 10:27:49.292765 5119 scope.go:117] "RemoveContainer" containerID="9173ddfe385ed90e1560115587c084319677cd31fe6b58f49b50a3f13999a6a8"
Jan 21 10:27:49 crc kubenswrapper[5119]: E0121 10:27:49.293302 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9173ddfe385ed90e1560115587c084319677cd31fe6b58f49b50a3f13999a6a8\": container with ID starting with 9173ddfe385ed90e1560115587c084319677cd31fe6b58f49b50a3f13999a6a8 not found: ID does not exist" containerID="9173ddfe385ed90e1560115587c084319677cd31fe6b58f49b50a3f13999a6a8"
Jan 21 10:27:49 crc kubenswrapper[5119]: I0121 10:27:49.293332 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9173ddfe385ed90e1560115587c084319677cd31fe6b58f49b50a3f13999a6a8"} err="failed to get container status \"9173ddfe385ed90e1560115587c084319677cd31fe6b58f49b50a3f13999a6a8\": rpc error: code = NotFound desc = could not find container \"9173ddfe385ed90e1560115587c084319677cd31fe6b58f49b50a3f13999a6a8\": container with ID starting with 9173ddfe385ed90e1560115587c084319677cd31fe6b58f49b50a3f13999a6a8 not found: ID does not exist"
Jan 21 10:27:49 crc kubenswrapper[5119]: I0121 10:27:49.293384 5119 scope.go:117] "RemoveContainer" containerID="72911fab591f8987e7b85fece911a3cb08b703d749cd3695635db5931750365f"
Jan 21 10:27:49 crc kubenswrapper[5119]: E0121 10:27:49.295394 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72911fab591f8987e7b85fece911a3cb08b703d749cd3695635db5931750365f\": container with ID starting with 72911fab591f8987e7b85fece911a3cb08b703d749cd3695635db5931750365f not found: ID does not exist" containerID="72911fab591f8987e7b85fece911a3cb08b703d749cd3695635db5931750365f"
Jan 21 10:27:49 crc kubenswrapper[5119]: I0121 10:27:49.295417 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72911fab591f8987e7b85fece911a3cb08b703d749cd3695635db5931750365f"} err="failed to get container status \"72911fab591f8987e7b85fece911a3cb08b703d749cd3695635db5931750365f\": rpc error: code = NotFound desc = could not find container \"72911fab591f8987e7b85fece911a3cb08b703d749cd3695635db5931750365f\": container with ID starting with 72911fab591f8987e7b85fece911a3cb08b703d749cd3695635db5931750365f not found: ID does not exist"
Jan 21 10:27:49 crc kubenswrapper[5119]: I0121 10:27:49.295431 5119 scope.go:117] "RemoveContainer" containerID="f46717e7dd88d0d7f1025355151f823a32ac7677ad404e70d431265f50655bfd"
Jan 21 10:27:49 crc kubenswrapper[5119]: E0121 10:27:49.295854 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f46717e7dd88d0d7f1025355151f823a32ac7677ad404e70d431265f50655bfd\": container with ID starting with f46717e7dd88d0d7f1025355151f823a32ac7677ad404e70d431265f50655bfd not found: ID does not exist" containerID="f46717e7dd88d0d7f1025355151f823a32ac7677ad404e70d431265f50655bfd"
Jan 21 10:27:49 crc kubenswrapper[5119]: I0121 10:27:49.295893 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f46717e7dd88d0d7f1025355151f823a32ac7677ad404e70d431265f50655bfd"} err="failed to get container status \"f46717e7dd88d0d7f1025355151f823a32ac7677ad404e70d431265f50655bfd\": rpc error: code = NotFound desc = could not find container \"f46717e7dd88d0d7f1025355151f823a32ac7677ad404e70d431265f50655bfd\": container with ID starting with f46717e7dd88d0d7f1025355151f823a32ac7677ad404e70d431265f50655bfd not found: ID does not exist"
Jan 21 10:27:49 crc kubenswrapper[5119]: I0121 10:27:49.333283 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pvfhr\" (UniqueName: \"kubernetes.io/projected/beb05a13-d340-4ab5-a754-328c14f28d21-kube-api-access-pvfhr\") on node \"crc\" DevicePath \"\""
Jan 21 10:27:49 crc kubenswrapper[5119]: I0121 10:27:49.333366 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beb05a13-d340-4ab5-a754-328c14f28d21-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 10:27:49 crc kubenswrapper[5119]: I0121 10:27:49.333379 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beb05a13-d340-4ab5-a754-328c14f28d21-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 10:27:49 crc kubenswrapper[5119]: I0121 10:27:49.541634 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gw22s"]
Jan 21 10:27:49 crc kubenswrapper[5119]: I0121 10:27:49.545781 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gw22s"]
Jan 21 10:27:50 crc kubenswrapper[5119]: I0121 10:27:50.600230 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beb05a13-d340-4ab5-a754-328c14f28d21" path="/var/lib/kubelet/pods/beb05a13-d340-4ab5-a754-328c14f28d21/volumes"
Jan 21 10:27:57 crc kubenswrapper[5119]: I0121 10:27:57.591152 5119 scope.go:117] "RemoveContainer" containerID="aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75"
Jan 21 10:27:57 crc kubenswrapper[5119]: E0121 10:27:57.591860 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5"
Jan 21 10:28:00 crc kubenswrapper[5119]: I0121 10:28:00.170467 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483188-cszvw"]
Jan 21 10:28:00 crc kubenswrapper[5119]: I0121 10:28:00.172476 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="beb05a13-d340-4ab5-a754-328c14f28d21" containerName="extract-utilities"
Jan 21 10:28:00 crc kubenswrapper[5119]: I0121 10:28:00.172503 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="beb05a13-d340-4ab5-a754-328c14f28d21" containerName="extract-utilities"
Jan 21 10:28:00 crc kubenswrapper[5119]: I0121 10:28:00.172544 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="beb05a13-d340-4ab5-a754-328c14f28d21" containerName="extract-content"
Jan 21 10:28:00 crc kubenswrapper[5119]: I0121 10:28:00.172557 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="beb05a13-d340-4ab5-a754-328c14f28d21" containerName="extract-content"
Jan 21 10:28:00 crc kubenswrapper[5119]: I0121 10:28:00.172594 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="beb05a13-d340-4ab5-a754-328c14f28d21" containerName="registry-server"
Jan 21 10:28:00 crc kubenswrapper[5119]: I0121 10:28:00.172637 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="beb05a13-d340-4ab5-a754-328c14f28d21" containerName="registry-server"
Jan 21 10:28:00 crc kubenswrapper[5119]: I0121 10:28:00.172944 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="beb05a13-d340-4ab5-a754-328c14f28d21" containerName="registry-server"
Jan 21 10:28:00 crc kubenswrapper[5119]: I0121 10:28:00.199722 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483188-cszvw"]
Jan 21 10:28:00 crc kubenswrapper[5119]: I0121 10:28:00.199908 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483188-cszvw"
Jan 21 10:28:00 crc kubenswrapper[5119]: I0121 10:28:00.219853 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\""
Jan 21 10:28:00 crc kubenswrapper[5119]: I0121 10:28:00.220243 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 10:28:00 crc kubenswrapper[5119]: I0121 10:28:00.220457 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 10:28:00 crc kubenswrapper[5119]: I0121 10:28:00.297442 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvznh\" (UniqueName: \"kubernetes.io/projected/9f17d6e4-dcf0-4543-a28e-aeff2c22d3ae-kube-api-access-cvznh\") pod \"auto-csr-approver-29483188-cszvw\" (UID: \"9f17d6e4-dcf0-4543-a28e-aeff2c22d3ae\") " pod="openshift-infra/auto-csr-approver-29483188-cszvw"
Jan 21 10:28:00 crc kubenswrapper[5119]: I0121 10:28:00.398764 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cvznh\" (UniqueName: \"kubernetes.io/projected/9f17d6e4-dcf0-4543-a28e-aeff2c22d3ae-kube-api-access-cvznh\") pod \"auto-csr-approver-29483188-cszvw\" (UID: \"9f17d6e4-dcf0-4543-a28e-aeff2c22d3ae\") " pod="openshift-infra/auto-csr-approver-29483188-cszvw"
Jan 21 10:28:00 crc kubenswrapper[5119]: I0121 10:28:00.423529 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvznh\" (UniqueName: \"kubernetes.io/projected/9f17d6e4-dcf0-4543-a28e-aeff2c22d3ae-kube-api-access-cvznh\") pod \"auto-csr-approver-29483188-cszvw\" (UID: \"9f17d6e4-dcf0-4543-a28e-aeff2c22d3ae\") " pod="openshift-infra/auto-csr-approver-29483188-cszvw"
Jan 21 10:28:00 crc kubenswrapper[5119]: I0121 10:28:00.551112 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483188-cszvw"
Jan 21 10:28:01 crc kubenswrapper[5119]: I0121 10:28:01.012007 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483188-cszvw"]
Jan 21 10:28:01 crc kubenswrapper[5119]: I0121 10:28:01.322359 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483188-cszvw" event={"ID":"9f17d6e4-dcf0-4543-a28e-aeff2c22d3ae","Type":"ContainerStarted","Data":"cfb391b11b0b4bf975c47c86103760435b044262713493f1e46fca6e392fe9c6"}
Jan 21 10:28:03 crc kubenswrapper[5119]: I0121 10:28:03.338195 5119 generic.go:358] "Generic (PLEG): container finished" podID="9f17d6e4-dcf0-4543-a28e-aeff2c22d3ae" containerID="7a364e3091b25955173d0d4bba67eecc3588fd3daa563abe3cc1c18d22aee89a" exitCode=0
Jan 21 10:28:03 crc kubenswrapper[5119]: I0121 10:28:03.338247 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483188-cszvw" event={"ID":"9f17d6e4-dcf0-4543-a28e-aeff2c22d3ae","Type":"ContainerDied","Data":"7a364e3091b25955173d0d4bba67eecc3588fd3daa563abe3cc1c18d22aee89a"}
Jan 21 10:28:04 crc kubenswrapper[5119]: I0121 10:28:04.677535 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483188-cszvw"
Jan 21 10:28:04 crc kubenswrapper[5119]: I0121 10:28:04.865422 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvznh\" (UniqueName: \"kubernetes.io/projected/9f17d6e4-dcf0-4543-a28e-aeff2c22d3ae-kube-api-access-cvznh\") pod \"9f17d6e4-dcf0-4543-a28e-aeff2c22d3ae\" (UID: \"9f17d6e4-dcf0-4543-a28e-aeff2c22d3ae\") "
Jan 21 10:28:04 crc kubenswrapper[5119]: I0121 10:28:04.870345 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f17d6e4-dcf0-4543-a28e-aeff2c22d3ae-kube-api-access-cvznh" (OuterVolumeSpecName: "kube-api-access-cvznh") pod "9f17d6e4-dcf0-4543-a28e-aeff2c22d3ae" (UID: "9f17d6e4-dcf0-4543-a28e-aeff2c22d3ae"). InnerVolumeSpecName "kube-api-access-cvznh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:28:04 crc kubenswrapper[5119]: I0121 10:28:04.967781 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cvznh\" (UniqueName: \"kubernetes.io/projected/9f17d6e4-dcf0-4543-a28e-aeff2c22d3ae-kube-api-access-cvznh\") on node \"crc\" DevicePath \"\""
Jan 21 10:28:05 crc kubenswrapper[5119]: I0121 10:28:05.353285 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483188-cszvw" event={"ID":"9f17d6e4-dcf0-4543-a28e-aeff2c22d3ae","Type":"ContainerDied","Data":"cfb391b11b0b4bf975c47c86103760435b044262713493f1e46fca6e392fe9c6"}
Jan 21 10:28:05 crc kubenswrapper[5119]: I0121 10:28:05.353325 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfb391b11b0b4bf975c47c86103760435b044262713493f1e46fca6e392fe9c6"
Jan 21 10:28:05 crc kubenswrapper[5119]: I0121 10:28:05.353345 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483188-cszvw"
Jan 21 10:28:05 crc kubenswrapper[5119]: I0121 10:28:05.728237 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483182-qwxqt"]
Jan 21 10:28:05 crc kubenswrapper[5119]: I0121 10:28:05.733271 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483182-qwxqt"]
Jan 21 10:28:06 crc kubenswrapper[5119]: I0121 10:28:06.607060 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dd1e8a4-08ba-4743-a5f9-3c4b69dd6e39" path="/var/lib/kubelet/pods/4dd1e8a4-08ba-4743-a5f9-3c4b69dd6e39/volumes"
Jan 21 10:28:12 crc kubenswrapper[5119]: I0121 10:28:12.591623 5119 scope.go:117] "RemoveContainer" containerID="aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75"
Jan 21 10:28:12 crc kubenswrapper[5119]: E0121 10:28:12.592097 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5"
Jan 21 10:28:25 crc kubenswrapper[5119]: I0121 10:28:25.590886 5119 scope.go:117] "RemoveContainer" containerID="aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75"
Jan 21 10:28:25 crc kubenswrapper[5119]: E0121 10:28:25.591653 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5"
Jan 21 10:28:27 crc kubenswrapper[5119]: I0121 10:28:27.713117 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jx5x6"]
Jan 21 10:28:27 crc kubenswrapper[5119]: I0121 10:28:27.714209 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9f17d6e4-dcf0-4543-a28e-aeff2c22d3ae" containerName="oc"
Jan 21 10:28:27 crc kubenswrapper[5119]: I0121 10:28:27.714226 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f17d6e4-dcf0-4543-a28e-aeff2c22d3ae" containerName="oc"
Jan 21 10:28:27 crc kubenswrapper[5119]: I0121 10:28:27.714399 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="9f17d6e4-dcf0-4543-a28e-aeff2c22d3ae" containerName="oc"
Jan 21 10:28:27 crc kubenswrapper[5119]: I0121 10:28:27.725525 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jx5x6"
Jan 21 10:28:27 crc kubenswrapper[5119]: I0121 10:28:27.726221 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jx5x6"]
Jan 21 10:28:27 crc kubenswrapper[5119]: I0121 10:28:27.808818 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b09a8706-94ae-4c06-8078-aa61e5ab509a-utilities\") pod \"community-operators-jx5x6\" (UID: \"b09a8706-94ae-4c06-8078-aa61e5ab509a\") " pod="openshift-marketplace/community-operators-jx5x6"
Jan 21 10:28:27 crc kubenswrapper[5119]: I0121 10:28:27.808869 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b09a8706-94ae-4c06-8078-aa61e5ab509a-catalog-content\") pod \"community-operators-jx5x6\" (UID: \"b09a8706-94ae-4c06-8078-aa61e5ab509a\") " pod="openshift-marketplace/community-operators-jx5x6"
Jan 21 10:28:27 crc kubenswrapper[5119]: I0121 10:28:27.809102 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fz9m\" (UniqueName: \"kubernetes.io/projected/b09a8706-94ae-4c06-8078-aa61e5ab509a-kube-api-access-5fz9m\") pod \"community-operators-jx5x6\" (UID: \"b09a8706-94ae-4c06-8078-aa61e5ab509a\") " pod="openshift-marketplace/community-operators-jx5x6"
Jan 21 10:28:27 crc kubenswrapper[5119]: I0121 10:28:27.910263 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5fz9m\" (UniqueName: \"kubernetes.io/projected/b09a8706-94ae-4c06-8078-aa61e5ab509a-kube-api-access-5fz9m\") pod \"community-operators-jx5x6\" (UID: \"b09a8706-94ae-4c06-8078-aa61e5ab509a\") " pod="openshift-marketplace/community-operators-jx5x6"
Jan 21 10:28:27 crc kubenswrapper[5119]: I0121 10:28:27.910806 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b09a8706-94ae-4c06-8078-aa61e5ab509a-utilities\") pod \"community-operators-jx5x6\" (UID: \"b09a8706-94ae-4c06-8078-aa61e5ab509a\") " pod="openshift-marketplace/community-operators-jx5x6"
Jan 21 10:28:27 crc kubenswrapper[5119]: I0121 10:28:27.910843 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b09a8706-94ae-4c06-8078-aa61e5ab509a-catalog-content\") pod \"community-operators-jx5x6\" (UID: \"b09a8706-94ae-4c06-8078-aa61e5ab509a\") " pod="openshift-marketplace/community-operators-jx5x6"
Jan 21 10:28:27 crc kubenswrapper[5119]: I0121 10:28:27.911706 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b09a8706-94ae-4c06-8078-aa61e5ab509a-catalog-content\") pod \"community-operators-jx5x6\" (UID: \"b09a8706-94ae-4c06-8078-aa61e5ab509a\") " pod="openshift-marketplace/community-operators-jx5x6"
Jan 21 10:28:27 crc kubenswrapper[5119]: I0121 10:28:27.911714 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b09a8706-94ae-4c06-8078-aa61e5ab509a-utilities\") pod \"community-operators-jx5x6\" (UID: \"b09a8706-94ae-4c06-8078-aa61e5ab509a\") " pod="openshift-marketplace/community-operators-jx5x6"
Jan 21 10:28:27 crc kubenswrapper[5119]: I0121 10:28:27.937018 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fz9m\" (UniqueName: \"kubernetes.io/projected/b09a8706-94ae-4c06-8078-aa61e5ab509a-kube-api-access-5fz9m\") pod \"community-operators-jx5x6\" (UID: \"b09a8706-94ae-4c06-8078-aa61e5ab509a\") " pod="openshift-marketplace/community-operators-jx5x6"
Jan 21 10:28:28 crc kubenswrapper[5119]: I0121 10:28:28.057488 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jx5x6"
Jan 21 10:28:28 crc kubenswrapper[5119]: I0121 10:28:28.582986 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jx5x6"]
Jan 21 10:28:29 crc kubenswrapper[5119]: I0121 10:28:29.549799 5119 generic.go:358] "Generic (PLEG): container finished" podID="b09a8706-94ae-4c06-8078-aa61e5ab509a" containerID="d0c2e4d442f99f2d2b746922424856663d148e3900d0f10db3bd4bf861cf0f75" exitCode=0
Jan 21 10:28:29 crc kubenswrapper[5119]: I0121 10:28:29.549926 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jx5x6" event={"ID":"b09a8706-94ae-4c06-8078-aa61e5ab509a","Type":"ContainerDied","Data":"d0c2e4d442f99f2d2b746922424856663d148e3900d0f10db3bd4bf861cf0f75"}
Jan 21 10:28:29 crc kubenswrapper[5119]: I0121 10:28:29.550171 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jx5x6" event={"ID":"b09a8706-94ae-4c06-8078-aa61e5ab509a","Type":"ContainerStarted","Data":"6afabc9929c08f4bbbd972c7bd8103d82719fb7837bcbe74686d532fd8817318"}
Jan 21 10:28:35 crc kubenswrapper[5119]: I0121 10:28:35.608929 5119 generic.go:358] "Generic (PLEG): container finished" podID="b09a8706-94ae-4c06-8078-aa61e5ab509a" containerID="8f9443a4de9f22e38c676145525d085d4bad4c162bd31d6af40f73031a8d2cd6" exitCode=0
Jan 21 10:28:35 crc kubenswrapper[5119]: I0121 10:28:35.609677 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jx5x6" event={"ID":"b09a8706-94ae-4c06-8078-aa61e5ab509a","Type":"ContainerDied","Data":"8f9443a4de9f22e38c676145525d085d4bad4c162bd31d6af40f73031a8d2cd6"}
Jan 21 10:28:36 crc kubenswrapper[5119]: I0121 10:28:36.599030 5119 scope.go:117] "RemoveContainer" containerID="aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75"
Jan 21 10:28:36 crc kubenswrapper[5119]: E0121 10:28:36.599596 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5"
Jan 21 10:28:36 crc kubenswrapper[5119]: I0121 10:28:36.620952 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jx5x6" event={"ID":"b09a8706-94ae-4c06-8078-aa61e5ab509a","Type":"ContainerStarted","Data":"06e575b707cca9a494998de10bc1b2975ed0cfe3273208d387e24c95ae13b6ec"}
Jan 21 10:28:36 crc kubenswrapper[5119]: I0121 10:28:36.643511 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jx5x6" podStartSLOduration=4.754724382 podStartE2EDuration="9.643491787s" podCreationTimestamp="2026-01-21 10:28:27 +0000 UTC" firstStartedPulling="2026-01-21 10:28:29.550536792 +0000 UTC m=+2025.218628470" lastFinishedPulling="2026-01-21 10:28:34.439304167 +0000 UTC m=+2030.107395875" observedRunningTime="2026-01-21 10:28:36.640850925 +0000 UTC m=+2032.308942633" watchObservedRunningTime="2026-01-21 10:28:36.643491787 +0000 UTC m=+2032.311583475"
Jan 21 10:28:38 crc kubenswrapper[5119]: I0121 10:28:38.058027 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jx5x6"
Jan 21 10:28:38 crc kubenswrapper[5119]: I0121 10:28:38.058288 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-jx5x6"
Jan 21 10:28:38 crc kubenswrapper[5119]: I0121 10:28:38.108981 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jx5x6"
Jan 21 10:28:49 crc kubenswrapper[5119]: I0121 10:28:49.046334 5119 scope.go:117] "RemoveContainer" containerID="7cfd1c789cd69d7fb462a8d1ab369dbe2a36afb5c260aae7e576d29bc3fa6c2b"
Jan 21 10:28:49 crc kubenswrapper[5119]: I0121 10:28:49.590635 5119 scope.go:117] "RemoveContainer" containerID="aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75"
Jan 21 10:28:49 crc kubenswrapper[5119]: E0121 10:28:49.590979 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5"
Jan 21 10:28:49 crc kubenswrapper[5119]: I0121 10:28:49.693167 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jx5x6"
Jan 21 10:28:49 crc kubenswrapper[5119]: I0121 10:28:49.737167 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jx5x6"]
Jan 21 10:28:49 crc kubenswrapper[5119]: I0121 10:28:49.737411 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jx5x6" podUID="b09a8706-94ae-4c06-8078-aa61e5ab509a" containerName="registry-server" containerID="cri-o://06e575b707cca9a494998de10bc1b2975ed0cfe3273208d387e24c95ae13b6ec" gracePeriod=2
Jan 21 10:28:50 crc kubenswrapper[5119]: I0121 10:28:50.138300 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jx5x6"
Jan 21 10:28:50 crc kubenswrapper[5119]: I0121 10:28:50.260048 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fz9m\" (UniqueName: \"kubernetes.io/projected/b09a8706-94ae-4c06-8078-aa61e5ab509a-kube-api-access-5fz9m\") pod \"b09a8706-94ae-4c06-8078-aa61e5ab509a\" (UID: \"b09a8706-94ae-4c06-8078-aa61e5ab509a\") "
Jan 21 10:28:50 crc kubenswrapper[5119]: I0121 10:28:50.260244 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b09a8706-94ae-4c06-8078-aa61e5ab509a-catalog-content\") pod \"b09a8706-94ae-4c06-8078-aa61e5ab509a\" (UID: \"b09a8706-94ae-4c06-8078-aa61e5ab509a\") "
Jan 21 10:28:50 crc kubenswrapper[5119]: I0121 10:28:50.260280 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b09a8706-94ae-4c06-8078-aa61e5ab509a-utilities\") pod \"b09a8706-94ae-4c06-8078-aa61e5ab509a\" (UID: \"b09a8706-94ae-4c06-8078-aa61e5ab509a\") "
Jan 21 10:28:50 crc kubenswrapper[5119]: I0121 10:28:50.261483 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b09a8706-94ae-4c06-8078-aa61e5ab509a-utilities" (OuterVolumeSpecName: "utilities") pod "b09a8706-94ae-4c06-8078-aa61e5ab509a" (UID: "b09a8706-94ae-4c06-8078-aa61e5ab509a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:28:50 crc kubenswrapper[5119]: I0121 10:28:50.266337 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b09a8706-94ae-4c06-8078-aa61e5ab509a-kube-api-access-5fz9m" (OuterVolumeSpecName: "kube-api-access-5fz9m") pod "b09a8706-94ae-4c06-8078-aa61e5ab509a" (UID: "b09a8706-94ae-4c06-8078-aa61e5ab509a"). InnerVolumeSpecName "kube-api-access-5fz9m". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:28:50 crc kubenswrapper[5119]: I0121 10:28:50.353752 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b09a8706-94ae-4c06-8078-aa61e5ab509a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b09a8706-94ae-4c06-8078-aa61e5ab509a" (UID: "b09a8706-94ae-4c06-8078-aa61e5ab509a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:28:50 crc kubenswrapper[5119]: I0121 10:28:50.361680 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5fz9m\" (UniqueName: \"kubernetes.io/projected/b09a8706-94ae-4c06-8078-aa61e5ab509a-kube-api-access-5fz9m\") on node \"crc\" DevicePath \"\""
Jan 21 10:28:50 crc kubenswrapper[5119]: I0121 10:28:50.361715 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b09a8706-94ae-4c06-8078-aa61e5ab509a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 10:28:50 crc kubenswrapper[5119]: I0121 10:28:50.361728 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b09a8706-94ae-4c06-8078-aa61e5ab509a-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 10:28:50 crc kubenswrapper[5119]: I0121 10:28:50.736012 5119 generic.go:358] "Generic (PLEG): container finished" podID="b09a8706-94ae-4c06-8078-aa61e5ab509a" containerID="06e575b707cca9a494998de10bc1b2975ed0cfe3273208d387e24c95ae13b6ec" exitCode=0
Jan 21 10:28:50 crc kubenswrapper[5119]: I0121 10:28:50.736199 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jx5x6" event={"ID":"b09a8706-94ae-4c06-8078-aa61e5ab509a","Type":"ContainerDied","Data":"06e575b707cca9a494998de10bc1b2975ed0cfe3273208d387e24c95ae13b6ec"}
Jan 21 10:28:50 crc kubenswrapper[5119]: I0121 10:28:50.736299 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jx5x6"
Jan 21 10:28:50 crc kubenswrapper[5119]: I0121 10:28:50.736440 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jx5x6" event={"ID":"b09a8706-94ae-4c06-8078-aa61e5ab509a","Type":"ContainerDied","Data":"6afabc9929c08f4bbbd972c7bd8103d82719fb7837bcbe74686d532fd8817318"}
Jan 21 10:28:50 crc kubenswrapper[5119]: I0121 10:28:50.736463 5119 scope.go:117] "RemoveContainer" containerID="06e575b707cca9a494998de10bc1b2975ed0cfe3273208d387e24c95ae13b6ec"
Jan 21 10:28:50 crc kubenswrapper[5119]: I0121 10:28:50.763278 5119 scope.go:117] "RemoveContainer" containerID="8f9443a4de9f22e38c676145525d085d4bad4c162bd31d6af40f73031a8d2cd6"
Jan 21 10:28:50 crc kubenswrapper[5119]: I0121 10:28:50.787827 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jx5x6"]
Jan 21 10:28:50 crc kubenswrapper[5119]: I0121 10:28:50.798837 5119 scope.go:117] "RemoveContainer" containerID="d0c2e4d442f99f2d2b746922424856663d148e3900d0f10db3bd4bf861cf0f75"
Jan 21 10:28:50 crc kubenswrapper[5119]: I0121 10:28:50.801979 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jx5x6"]
Jan 21 10:28:50 crc kubenswrapper[5119]: I0121 10:28:50.819475 5119 scope.go:117] "RemoveContainer" containerID="06e575b707cca9a494998de10bc1b2975ed0cfe3273208d387e24c95ae13b6ec"
Jan 21 10:28:50 crc kubenswrapper[5119]: E0121 10:28:50.819940 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06e575b707cca9a494998de10bc1b2975ed0cfe3273208d387e24c95ae13b6ec\": container with ID starting with 06e575b707cca9a494998de10bc1b2975ed0cfe3273208d387e24c95ae13b6ec not found: ID does not exist" containerID="06e575b707cca9a494998de10bc1b2975ed0cfe3273208d387e24c95ae13b6ec"
Jan 21 10:28:50 crc kubenswrapper[5119]: I0121 10:28:50.819973 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06e575b707cca9a494998de10bc1b2975ed0cfe3273208d387e24c95ae13b6ec"} err="failed to get container status \"06e575b707cca9a494998de10bc1b2975ed0cfe3273208d387e24c95ae13b6ec\": rpc error: code = NotFound desc = could not find container \"06e575b707cca9a494998de10bc1b2975ed0cfe3273208d387e24c95ae13b6ec\": container with ID starting with 06e575b707cca9a494998de10bc1b2975ed0cfe3273208d387e24c95ae13b6ec not found: ID does not exist"
Jan 21 10:28:50 crc kubenswrapper[5119]: I0121 10:28:50.819996 5119 scope.go:117] "RemoveContainer" containerID="8f9443a4de9f22e38c676145525d085d4bad4c162bd31d6af40f73031a8d2cd6"
Jan 21 10:28:50 crc kubenswrapper[5119]: E0121 10:28:50.820338 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f9443a4de9f22e38c676145525d085d4bad4c162bd31d6af40f73031a8d2cd6\": container with ID starting with 8f9443a4de9f22e38c676145525d085d4bad4c162bd31d6af40f73031a8d2cd6 not found: ID does not exist" containerID="8f9443a4de9f22e38c676145525d085d4bad4c162bd31d6af40f73031a8d2cd6"
Jan 21 10:28:50 crc kubenswrapper[5119]: I0121 10:28:50.820379 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f9443a4de9f22e38c676145525d085d4bad4c162bd31d6af40f73031a8d2cd6"} err="failed to get container status \"8f9443a4de9f22e38c676145525d085d4bad4c162bd31d6af40f73031a8d2cd6\": rpc error: code = NotFound desc = could not find container \"8f9443a4de9f22e38c676145525d085d4bad4c162bd31d6af40f73031a8d2cd6\": container with ID starting with 8f9443a4de9f22e38c676145525d085d4bad4c162bd31d6af40f73031a8d2cd6 not found: ID does not exist"
Jan 21 10:28:50 crc kubenswrapper[5119]: I0121 10:28:50.820395 5119 scope.go:117] "RemoveContainer" containerID="d0c2e4d442f99f2d2b746922424856663d148e3900d0f10db3bd4bf861cf0f75"
Jan 21 10:28:50 crc kubenswrapper[5119]: E0121 10:28:50.820664 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0c2e4d442f99f2d2b746922424856663d148e3900d0f10db3bd4bf861cf0f75\": container with ID starting with d0c2e4d442f99f2d2b746922424856663d148e3900d0f10db3bd4bf861cf0f75 not found: ID does not exist" containerID="d0c2e4d442f99f2d2b746922424856663d148e3900d0f10db3bd4bf861cf0f75"
Jan 21 10:28:50 crc kubenswrapper[5119]: I0121 10:28:50.820692 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0c2e4d442f99f2d2b746922424856663d148e3900d0f10db3bd4bf861cf0f75"} err="failed to get container status \"d0c2e4d442f99f2d2b746922424856663d148e3900d0f10db3bd4bf861cf0f75\": rpc error: code = NotFound desc = could not find container \"d0c2e4d442f99f2d2b746922424856663d148e3900d0f10db3bd4bf861cf0f75\": container with ID starting with d0c2e4d442f99f2d2b746922424856663d148e3900d0f10db3bd4bf861cf0f75 not found: ID does not exist"
Jan 21 10:28:52 crc kubenswrapper[5119]: I0121 10:28:52.606015 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b09a8706-94ae-4c06-8078-aa61e5ab509a" path="/var/lib/kubelet/pods/b09a8706-94ae-4c06-8078-aa61e5ab509a/volumes"
Jan 21 10:28:53 crc kubenswrapper[5119]: I0121 10:28:53.349494 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7qrhm"]
Jan 21 10:28:53 crc kubenswrapper[5119]: I0121 10:28:53.350675 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b09a8706-94ae-4c06-8078-aa61e5ab509a" containerName="extract-utilities"
Jan 21 10:28:53 crc kubenswrapper[5119]: I0121 10:28:53.350704 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="b09a8706-94ae-4c06-8078-aa61e5ab509a" containerName="extract-utilities"
Jan 21 10:28:53 crc kubenswrapper[5119]: I0121 10:28:53.350728 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b09a8706-94ae-4c06-8078-aa61e5ab509a" containerName="extract-content"
Jan 21 10:28:53 crc kubenswrapper[5119]: I0121 10:28:53.350738 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="b09a8706-94ae-4c06-8078-aa61e5ab509a" containerName="extract-content"
Jan 21 10:28:53 crc kubenswrapper[5119]: I0121 10:28:53.350759 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b09a8706-94ae-4c06-8078-aa61e5ab509a" containerName="registry-server"
Jan 21 10:28:53 crc kubenswrapper[5119]: I0121 10:28:53.350769 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="b09a8706-94ae-4c06-8078-aa61e5ab509a" containerName="registry-server"
Jan 21 10:28:53 crc kubenswrapper[5119]: I0121 10:28:53.351008 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="b09a8706-94ae-4c06-8078-aa61e5ab509a" containerName="registry-server"
Jan 21 10:28:53 crc kubenswrapper[5119]: I0121 10:28:53.364943 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7qrhm"]
Jan 21 10:28:53 crc kubenswrapper[5119]: I0121 10:28:53.365123 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7qrhm"
Jan 21 10:28:53 crc kubenswrapper[5119]: I0121 10:28:53.405059 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2-catalog-content\") pod \"redhat-operators-7qrhm\" (UID: \"e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2\") " pod="openshift-marketplace/redhat-operators-7qrhm"
Jan 21 10:28:53 crc kubenswrapper[5119]: I0121 10:28:53.405100 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2-utilities\") pod \"redhat-operators-7qrhm\" (UID: \"e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2\") " pod="openshift-marketplace/redhat-operators-7qrhm"
Jan 21 10:28:53 crc kubenswrapper[5119]: I0121 10:28:53.405184 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2x8b\" (UniqueName: \"kubernetes.io/projected/e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2-kube-api-access-r2x8b\") pod \"redhat-operators-7qrhm\" (UID: \"e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2\") " pod="openshift-marketplace/redhat-operators-7qrhm"
Jan 21 10:28:53 crc kubenswrapper[5119]: I0121 10:28:53.505896 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r2x8b\" (UniqueName: \"kubernetes.io/projected/e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2-kube-api-access-r2x8b\") pod \"redhat-operators-7qrhm\" (UID: \"e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2\") " pod="openshift-marketplace/redhat-operators-7qrhm"
Jan 21 10:28:53 crc kubenswrapper[5119]: I0121 10:28:53.505978 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2-catalog-content\") pod \"redhat-operators-7qrhm\"
(UID: \"e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2\") " pod="openshift-marketplace/redhat-operators-7qrhm" Jan 21 10:28:53 crc kubenswrapper[5119]: I0121 10:28:53.506003 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2-utilities\") pod \"redhat-operators-7qrhm\" (UID: \"e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2\") " pod="openshift-marketplace/redhat-operators-7qrhm" Jan 21 10:28:53 crc kubenswrapper[5119]: I0121 10:28:53.506439 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2-utilities\") pod \"redhat-operators-7qrhm\" (UID: \"e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2\") " pod="openshift-marketplace/redhat-operators-7qrhm" Jan 21 10:28:53 crc kubenswrapper[5119]: I0121 10:28:53.507000 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2-catalog-content\") pod \"redhat-operators-7qrhm\" (UID: \"e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2\") " pod="openshift-marketplace/redhat-operators-7qrhm" Jan 21 10:28:53 crc kubenswrapper[5119]: I0121 10:28:53.526811 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2x8b\" (UniqueName: \"kubernetes.io/projected/e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2-kube-api-access-r2x8b\") pod \"redhat-operators-7qrhm\" (UID: \"e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2\") " pod="openshift-marketplace/redhat-operators-7qrhm" Jan 21 10:28:53 crc kubenswrapper[5119]: I0121 10:28:53.726464 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7qrhm" Jan 21 10:28:53 crc kubenswrapper[5119]: I0121 10:28:53.941842 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7qrhm"] Jan 21 10:28:54 crc kubenswrapper[5119]: I0121 10:28:54.766533 5119 generic.go:358] "Generic (PLEG): container finished" podID="e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2" containerID="7cf0a060656673e5bd236ee9a9da675e976d070afe35e1b3ea7322c3458bdb10" exitCode=0 Jan 21 10:28:54 crc kubenswrapper[5119]: I0121 10:28:54.766574 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7qrhm" event={"ID":"e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2","Type":"ContainerDied","Data":"7cf0a060656673e5bd236ee9a9da675e976d070afe35e1b3ea7322c3458bdb10"} Jan 21 10:28:54 crc kubenswrapper[5119]: I0121 10:28:54.766945 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7qrhm" event={"ID":"e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2","Type":"ContainerStarted","Data":"14daaa34e80e2dfa2a80e866c88e035e36619c8f4583b6ad4322069b81f159dc"} Jan 21 10:28:56 crc kubenswrapper[5119]: I0121 10:28:56.792560 5119 generic.go:358] "Generic (PLEG): container finished" podID="e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2" containerID="123c2b04f6983cd17a990f3172f6d2ee9c4983b1210107c7d9543effc3de3de5" exitCode=0 Jan 21 10:28:56 crc kubenswrapper[5119]: I0121 10:28:56.792842 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7qrhm" event={"ID":"e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2","Type":"ContainerDied","Data":"123c2b04f6983cd17a990f3172f6d2ee9c4983b1210107c7d9543effc3de3de5"} Jan 21 10:28:57 crc kubenswrapper[5119]: I0121 10:28:57.801447 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7qrhm" 
event={"ID":"e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2","Type":"ContainerStarted","Data":"147e332b64b323b434acc2cab84ca37d48803eea72ca3f071bce25cbac4714a9"} Jan 21 10:28:57 crc kubenswrapper[5119]: I0121 10:28:57.817985 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7qrhm" podStartSLOduration=3.91411049 podStartE2EDuration="4.817967962s" podCreationTimestamp="2026-01-21 10:28:53 +0000 UTC" firstStartedPulling="2026-01-21 10:28:54.767415353 +0000 UTC m=+2050.435507031" lastFinishedPulling="2026-01-21 10:28:55.671272835 +0000 UTC m=+2051.339364503" observedRunningTime="2026-01-21 10:28:57.814800946 +0000 UTC m=+2053.482892624" watchObservedRunningTime="2026-01-21 10:28:57.817967962 +0000 UTC m=+2053.486059640" Jan 21 10:29:00 crc kubenswrapper[5119]: I0121 10:29:00.596433 5119 scope.go:117] "RemoveContainer" containerID="aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75" Jan 21 10:29:00 crc kubenswrapper[5119]: E0121 10:29:00.597153 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:29:03 crc kubenswrapper[5119]: I0121 10:29:03.727342 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7qrhm" Jan 21 10:29:03 crc kubenswrapper[5119]: I0121 10:29:03.728255 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-7qrhm" Jan 21 10:29:03 crc kubenswrapper[5119]: I0121 10:29:03.767052 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-7qrhm" Jan 21 10:29:03 crc kubenswrapper[5119]: I0121 10:29:03.883798 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7qrhm" Jan 21 10:29:04 crc kubenswrapper[5119]: I0121 10:29:04.006474 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7qrhm"] Jan 21 10:29:05 crc kubenswrapper[5119]: I0121 10:29:05.863045 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7qrhm" podUID="e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2" containerName="registry-server" containerID="cri-o://147e332b64b323b434acc2cab84ca37d48803eea72ca3f071bce25cbac4714a9" gracePeriod=2 Jan 21 10:29:06 crc kubenswrapper[5119]: I0121 10:29:06.790953 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7qrhm" Jan 21 10:29:06 crc kubenswrapper[5119]: I0121 10:29:06.872365 5119 generic.go:358] "Generic (PLEG): container finished" podID="e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2" containerID="147e332b64b323b434acc2cab84ca37d48803eea72ca3f071bce25cbac4714a9" exitCode=0 Jan 21 10:29:06 crc kubenswrapper[5119]: I0121 10:29:06.872591 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7qrhm" event={"ID":"e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2","Type":"ContainerDied","Data":"147e332b64b323b434acc2cab84ca37d48803eea72ca3f071bce25cbac4714a9"} Jan 21 10:29:06 crc kubenswrapper[5119]: I0121 10:29:06.872657 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7qrhm" event={"ID":"e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2","Type":"ContainerDied","Data":"14daaa34e80e2dfa2a80e866c88e035e36619c8f4583b6ad4322069b81f159dc"} Jan 21 10:29:06 crc kubenswrapper[5119]: I0121 10:29:06.872677 5119 scope.go:117] "RemoveContainer" 
containerID="147e332b64b323b434acc2cab84ca37d48803eea72ca3f071bce25cbac4714a9" Jan 21 10:29:06 crc kubenswrapper[5119]: I0121 10:29:06.872867 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7qrhm" Jan 21 10:29:06 crc kubenswrapper[5119]: I0121 10:29:06.895771 5119 scope.go:117] "RemoveContainer" containerID="123c2b04f6983cd17a990f3172f6d2ee9c4983b1210107c7d9543effc3de3de5" Jan 21 10:29:06 crc kubenswrapper[5119]: I0121 10:29:06.907689 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2-utilities\") pod \"e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2\" (UID: \"e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2\") " Jan 21 10:29:06 crc kubenswrapper[5119]: I0121 10:29:06.907734 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2-catalog-content\") pod \"e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2\" (UID: \"e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2\") " Jan 21 10:29:06 crc kubenswrapper[5119]: I0121 10:29:06.907814 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2x8b\" (UniqueName: \"kubernetes.io/projected/e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2-kube-api-access-r2x8b\") pod \"e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2\" (UID: \"e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2\") " Jan 21 10:29:06 crc kubenswrapper[5119]: I0121 10:29:06.910139 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2-utilities" (OuterVolumeSpecName: "utilities") pod "e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2" (UID: "e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:29:06 crc kubenswrapper[5119]: I0121 10:29:06.917969 5119 scope.go:117] "RemoveContainer" containerID="7cf0a060656673e5bd236ee9a9da675e976d070afe35e1b3ea7322c3458bdb10" Jan 21 10:29:06 crc kubenswrapper[5119]: I0121 10:29:06.920018 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2-kube-api-access-r2x8b" (OuterVolumeSpecName: "kube-api-access-r2x8b") pod "e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2" (UID: "e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2"). InnerVolumeSpecName "kube-api-access-r2x8b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:29:06 crc kubenswrapper[5119]: I0121 10:29:06.993062 5119 scope.go:117] "RemoveContainer" containerID="147e332b64b323b434acc2cab84ca37d48803eea72ca3f071bce25cbac4714a9" Jan 21 10:29:06 crc kubenswrapper[5119]: E0121 10:29:06.993631 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"147e332b64b323b434acc2cab84ca37d48803eea72ca3f071bce25cbac4714a9\": container with ID starting with 147e332b64b323b434acc2cab84ca37d48803eea72ca3f071bce25cbac4714a9 not found: ID does not exist" containerID="147e332b64b323b434acc2cab84ca37d48803eea72ca3f071bce25cbac4714a9" Jan 21 10:29:06 crc kubenswrapper[5119]: I0121 10:29:06.993677 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"147e332b64b323b434acc2cab84ca37d48803eea72ca3f071bce25cbac4714a9"} err="failed to get container status \"147e332b64b323b434acc2cab84ca37d48803eea72ca3f071bce25cbac4714a9\": rpc error: code = NotFound desc = could not find container \"147e332b64b323b434acc2cab84ca37d48803eea72ca3f071bce25cbac4714a9\": container with ID starting with 147e332b64b323b434acc2cab84ca37d48803eea72ca3f071bce25cbac4714a9 not found: ID does not exist" Jan 21 10:29:06 crc kubenswrapper[5119]: I0121 10:29:06.993702 
5119 scope.go:117] "RemoveContainer" containerID="123c2b04f6983cd17a990f3172f6d2ee9c4983b1210107c7d9543effc3de3de5" Jan 21 10:29:06 crc kubenswrapper[5119]: E0121 10:29:06.994147 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"123c2b04f6983cd17a990f3172f6d2ee9c4983b1210107c7d9543effc3de3de5\": container with ID starting with 123c2b04f6983cd17a990f3172f6d2ee9c4983b1210107c7d9543effc3de3de5 not found: ID does not exist" containerID="123c2b04f6983cd17a990f3172f6d2ee9c4983b1210107c7d9543effc3de3de5" Jan 21 10:29:06 crc kubenswrapper[5119]: I0121 10:29:06.994290 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"123c2b04f6983cd17a990f3172f6d2ee9c4983b1210107c7d9543effc3de3de5"} err="failed to get container status \"123c2b04f6983cd17a990f3172f6d2ee9c4983b1210107c7d9543effc3de3de5\": rpc error: code = NotFound desc = could not find container \"123c2b04f6983cd17a990f3172f6d2ee9c4983b1210107c7d9543effc3de3de5\": container with ID starting with 123c2b04f6983cd17a990f3172f6d2ee9c4983b1210107c7d9543effc3de3de5 not found: ID does not exist" Jan 21 10:29:06 crc kubenswrapper[5119]: I0121 10:29:06.994380 5119 scope.go:117] "RemoveContainer" containerID="7cf0a060656673e5bd236ee9a9da675e976d070afe35e1b3ea7322c3458bdb10" Jan 21 10:29:06 crc kubenswrapper[5119]: E0121 10:29:06.994744 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cf0a060656673e5bd236ee9a9da675e976d070afe35e1b3ea7322c3458bdb10\": container with ID starting with 7cf0a060656673e5bd236ee9a9da675e976d070afe35e1b3ea7322c3458bdb10 not found: ID does not exist" containerID="7cf0a060656673e5bd236ee9a9da675e976d070afe35e1b3ea7322c3458bdb10" Jan 21 10:29:06 crc kubenswrapper[5119]: I0121 10:29:06.994773 5119 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7cf0a060656673e5bd236ee9a9da675e976d070afe35e1b3ea7322c3458bdb10"} err="failed to get container status \"7cf0a060656673e5bd236ee9a9da675e976d070afe35e1b3ea7322c3458bdb10\": rpc error: code = NotFound desc = could not find container \"7cf0a060656673e5bd236ee9a9da675e976d070afe35e1b3ea7322c3458bdb10\": container with ID starting with 7cf0a060656673e5bd236ee9a9da675e976d070afe35e1b3ea7322c3458bdb10 not found: ID does not exist" Jan 21 10:29:07 crc kubenswrapper[5119]: I0121 10:29:07.009894 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:29:07 crc kubenswrapper[5119]: I0121 10:29:07.010178 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r2x8b\" (UniqueName: \"kubernetes.io/projected/e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2-kube-api-access-r2x8b\") on node \"crc\" DevicePath \"\"" Jan 21 10:29:07 crc kubenswrapper[5119]: I0121 10:29:07.021726 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2" (UID: "e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:29:07 crc kubenswrapper[5119]: I0121 10:29:07.111725 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:29:07 crc kubenswrapper[5119]: I0121 10:29:07.205512 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7qrhm"] Jan 21 10:29:07 crc kubenswrapper[5119]: I0121 10:29:07.213237 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7qrhm"] Jan 21 10:29:08 crc kubenswrapper[5119]: I0121 10:29:08.601298 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2" path="/var/lib/kubelet/pods/e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2/volumes" Jan 21 10:29:15 crc kubenswrapper[5119]: I0121 10:29:15.591800 5119 scope.go:117] "RemoveContainer" containerID="aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75" Jan 21 10:29:15 crc kubenswrapper[5119]: E0121 10:29:15.592416 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:29:30 crc kubenswrapper[5119]: I0121 10:29:30.591454 5119 scope.go:117] "RemoveContainer" containerID="aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75" Jan 21 10:29:31 crc kubenswrapper[5119]: I0121 10:29:31.079475 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" 
event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerStarted","Data":"db5c74c1b0820ae259270b2d4b510cc9e42408df067b29b3a65632154df8b8bc"} Jan 21 10:29:46 crc kubenswrapper[5119]: I0121 10:29:46.274883 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log" Jan 21 10:29:46 crc kubenswrapper[5119]: I0121 10:29:46.276437 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log" Jan 21 10:29:46 crc kubenswrapper[5119]: I0121 10:29:46.279939 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 21 10:29:46 crc kubenswrapper[5119]: I0121 10:29:46.280178 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.140598 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483190-68kpk"] Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.142493 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2" containerName="extract-content" Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.142553 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2" containerName="extract-content" Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.142592 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2" containerName="registry-server" Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.142676 5119 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2" containerName="registry-server" Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.142726 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2" containerName="extract-utilities" Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.142739 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2" containerName="extract-utilities" Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.142952 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="e43d8e94-44c2-4dd0-a8a6-dd8bcaa314a2" containerName="registry-server" Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.159035 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483190-68kpk"] Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.159217 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483190-68kpk" Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.161633 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\"" Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.165436 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.170830 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.170946 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483190-cqjnw"] Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.176047 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-cqjnw" Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.178418 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.179127 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.184266 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483190-cqjnw"] Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.288398 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4052c95c-6d61-4b09-8714-4f1f73396f88-secret-volume\") pod \"collect-profiles-29483190-cqjnw\" (UID: \"4052c95c-6d61-4b09-8714-4f1f73396f88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-cqjnw" Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.288476 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksn4l\" (UniqueName: \"kubernetes.io/projected/4fc676e3-9b82-4156-9b46-c29cbe6b86b8-kube-api-access-ksn4l\") pod \"auto-csr-approver-29483190-68kpk\" (UID: \"4fc676e3-9b82-4156-9b46-c29cbe6b86b8\") " pod="openshift-infra/auto-csr-approver-29483190-68kpk" Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.288556 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hflkn\" (UniqueName: \"kubernetes.io/projected/4052c95c-6d61-4b09-8714-4f1f73396f88-kube-api-access-hflkn\") pod \"collect-profiles-29483190-cqjnw\" (UID: \"4052c95c-6d61-4b09-8714-4f1f73396f88\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-cqjnw" Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.288623 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4052c95c-6d61-4b09-8714-4f1f73396f88-config-volume\") pod \"collect-profiles-29483190-cqjnw\" (UID: \"4052c95c-6d61-4b09-8714-4f1f73396f88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-cqjnw" Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.390436 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4052c95c-6d61-4b09-8714-4f1f73396f88-secret-volume\") pod \"collect-profiles-29483190-cqjnw\" (UID: \"4052c95c-6d61-4b09-8714-4f1f73396f88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-cqjnw" Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.390521 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ksn4l\" (UniqueName: \"kubernetes.io/projected/4fc676e3-9b82-4156-9b46-c29cbe6b86b8-kube-api-access-ksn4l\") pod \"auto-csr-approver-29483190-68kpk\" (UID: \"4fc676e3-9b82-4156-9b46-c29cbe6b86b8\") " pod="openshift-infra/auto-csr-approver-29483190-68kpk" Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.390585 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hflkn\" (UniqueName: \"kubernetes.io/projected/4052c95c-6d61-4b09-8714-4f1f73396f88-kube-api-access-hflkn\") pod \"collect-profiles-29483190-cqjnw\" (UID: \"4052c95c-6d61-4b09-8714-4f1f73396f88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-cqjnw" Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.390670 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/4052c95c-6d61-4b09-8714-4f1f73396f88-config-volume\") pod \"collect-profiles-29483190-cqjnw\" (UID: \"4052c95c-6d61-4b09-8714-4f1f73396f88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-cqjnw" Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.391905 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4052c95c-6d61-4b09-8714-4f1f73396f88-config-volume\") pod \"collect-profiles-29483190-cqjnw\" (UID: \"4052c95c-6d61-4b09-8714-4f1f73396f88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-cqjnw" Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.399012 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4052c95c-6d61-4b09-8714-4f1f73396f88-secret-volume\") pod \"collect-profiles-29483190-cqjnw\" (UID: \"4052c95c-6d61-4b09-8714-4f1f73396f88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-cqjnw" Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.407817 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksn4l\" (UniqueName: \"kubernetes.io/projected/4fc676e3-9b82-4156-9b46-c29cbe6b86b8-kube-api-access-ksn4l\") pod \"auto-csr-approver-29483190-68kpk\" (UID: \"4fc676e3-9b82-4156-9b46-c29cbe6b86b8\") " pod="openshift-infra/auto-csr-approver-29483190-68kpk" Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.430550 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hflkn\" (UniqueName: \"kubernetes.io/projected/4052c95c-6d61-4b09-8714-4f1f73396f88-kube-api-access-hflkn\") pod \"collect-profiles-29483190-cqjnw\" (UID: \"4052c95c-6d61-4b09-8714-4f1f73396f88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-cqjnw" Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.494392 5119 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483190-68kpk" Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.500455 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-cqjnw" Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.974750 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483190-68kpk"] Jan 21 10:30:00 crc kubenswrapper[5119]: I0121 10:30:00.988889 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483190-cqjnw"] Jan 21 10:30:01 crc kubenswrapper[5119]: I0121 10:30:01.341485 5119 generic.go:358] "Generic (PLEG): container finished" podID="4052c95c-6d61-4b09-8714-4f1f73396f88" containerID="4d8d89472f6442d8cf5097963b89ef263e228b29cd1b3f55f8a6b9a505f131b6" exitCode=0 Jan 21 10:30:01 crc kubenswrapper[5119]: I0121 10:30:01.341554 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-cqjnw" event={"ID":"4052c95c-6d61-4b09-8714-4f1f73396f88","Type":"ContainerDied","Data":"4d8d89472f6442d8cf5097963b89ef263e228b29cd1b3f55f8a6b9a505f131b6"} Jan 21 10:30:01 crc kubenswrapper[5119]: I0121 10:30:01.341869 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-cqjnw" event={"ID":"4052c95c-6d61-4b09-8714-4f1f73396f88","Type":"ContainerStarted","Data":"d32a3ebb57c144990ea745fb1757440cc5a6975bfada509bacedb4aa256d6032"} Jan 21 10:30:01 crc kubenswrapper[5119]: I0121 10:30:01.343985 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483190-68kpk" event={"ID":"4fc676e3-9b82-4156-9b46-c29cbe6b86b8","Type":"ContainerStarted","Data":"490e3c59c2e7d6107f62ae921f7f40486833c20af8ad0c3f3b4705c862e62e39"} Jan 21 10:30:02 crc kubenswrapper[5119]: I0121 
10:30:02.614131 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-cqjnw" Jan 21 10:30:02 crc kubenswrapper[5119]: I0121 10:30:02.728579 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4052c95c-6d61-4b09-8714-4f1f73396f88-config-volume\") pod \"4052c95c-6d61-4b09-8714-4f1f73396f88\" (UID: \"4052c95c-6d61-4b09-8714-4f1f73396f88\") " Jan 21 10:30:02 crc kubenswrapper[5119]: I0121 10:30:02.728772 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4052c95c-6d61-4b09-8714-4f1f73396f88-secret-volume\") pod \"4052c95c-6d61-4b09-8714-4f1f73396f88\" (UID: \"4052c95c-6d61-4b09-8714-4f1f73396f88\") " Jan 21 10:30:02 crc kubenswrapper[5119]: I0121 10:30:02.728800 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hflkn\" (UniqueName: \"kubernetes.io/projected/4052c95c-6d61-4b09-8714-4f1f73396f88-kube-api-access-hflkn\") pod \"4052c95c-6d61-4b09-8714-4f1f73396f88\" (UID: \"4052c95c-6d61-4b09-8714-4f1f73396f88\") " Jan 21 10:30:02 crc kubenswrapper[5119]: I0121 10:30:02.730247 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4052c95c-6d61-4b09-8714-4f1f73396f88-config-volume" (OuterVolumeSpecName: "config-volume") pod "4052c95c-6d61-4b09-8714-4f1f73396f88" (UID: "4052c95c-6d61-4b09-8714-4f1f73396f88"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:30:02 crc kubenswrapper[5119]: I0121 10:30:02.735308 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4052c95c-6d61-4b09-8714-4f1f73396f88-kube-api-access-hflkn" (OuterVolumeSpecName: "kube-api-access-hflkn") pod "4052c95c-6d61-4b09-8714-4f1f73396f88" (UID: "4052c95c-6d61-4b09-8714-4f1f73396f88"). InnerVolumeSpecName "kube-api-access-hflkn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:30:02 crc kubenswrapper[5119]: I0121 10:30:02.753768 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4052c95c-6d61-4b09-8714-4f1f73396f88-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4052c95c-6d61-4b09-8714-4f1f73396f88" (UID: "4052c95c-6d61-4b09-8714-4f1f73396f88"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:30:02 crc kubenswrapper[5119]: I0121 10:30:02.831529 5119 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4052c95c-6d61-4b09-8714-4f1f73396f88-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 10:30:02 crc kubenswrapper[5119]: I0121 10:30:02.831575 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hflkn\" (UniqueName: \"kubernetes.io/projected/4052c95c-6d61-4b09-8714-4f1f73396f88-kube-api-access-hflkn\") on node \"crc\" DevicePath \"\"" Jan 21 10:30:02 crc kubenswrapper[5119]: I0121 10:30:02.831588 5119 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4052c95c-6d61-4b09-8714-4f1f73396f88-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 10:30:03 crc kubenswrapper[5119]: I0121 10:30:03.365526 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-cqjnw" 
event={"ID":"4052c95c-6d61-4b09-8714-4f1f73396f88","Type":"ContainerDied","Data":"d32a3ebb57c144990ea745fb1757440cc5a6975bfada509bacedb4aa256d6032"} Jan 21 10:30:03 crc kubenswrapper[5119]: I0121 10:30:03.365579 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d32a3ebb57c144990ea745fb1757440cc5a6975bfada509bacedb4aa256d6032" Jan 21 10:30:03 crc kubenswrapper[5119]: I0121 10:30:03.365586 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-cqjnw" Jan 21 10:30:03 crc kubenswrapper[5119]: I0121 10:30:03.682213 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483145-q8nfm"] Jan 21 10:30:03 crc kubenswrapper[5119]: I0121 10:30:03.688872 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483145-q8nfm"] Jan 21 10:30:04 crc kubenswrapper[5119]: I0121 10:30:04.374136 5119 generic.go:358] "Generic (PLEG): container finished" podID="4fc676e3-9b82-4156-9b46-c29cbe6b86b8" containerID="a6e761489df7498ad8422973211a8edc6bacc7883be745210fad7dc02b55f00f" exitCode=0 Jan 21 10:30:04 crc kubenswrapper[5119]: I0121 10:30:04.374230 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483190-68kpk" event={"ID":"4fc676e3-9b82-4156-9b46-c29cbe6b86b8","Type":"ContainerDied","Data":"a6e761489df7498ad8422973211a8edc6bacc7883be745210fad7dc02b55f00f"} Jan 21 10:30:04 crc kubenswrapper[5119]: I0121 10:30:04.599481 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f53b6ab7-e57d-4f85-adef-9a60515f8f1f" path="/var/lib/kubelet/pods/f53b6ab7-e57d-4f85-adef-9a60515f8f1f/volumes" Jan 21 10:30:05 crc kubenswrapper[5119]: I0121 10:30:05.619975 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483190-68kpk" Jan 21 10:30:05 crc kubenswrapper[5119]: I0121 10:30:05.776001 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksn4l\" (UniqueName: \"kubernetes.io/projected/4fc676e3-9b82-4156-9b46-c29cbe6b86b8-kube-api-access-ksn4l\") pod \"4fc676e3-9b82-4156-9b46-c29cbe6b86b8\" (UID: \"4fc676e3-9b82-4156-9b46-c29cbe6b86b8\") " Jan 21 10:30:05 crc kubenswrapper[5119]: I0121 10:30:05.781502 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fc676e3-9b82-4156-9b46-c29cbe6b86b8-kube-api-access-ksn4l" (OuterVolumeSpecName: "kube-api-access-ksn4l") pod "4fc676e3-9b82-4156-9b46-c29cbe6b86b8" (UID: "4fc676e3-9b82-4156-9b46-c29cbe6b86b8"). InnerVolumeSpecName "kube-api-access-ksn4l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:30:05 crc kubenswrapper[5119]: I0121 10:30:05.877605 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ksn4l\" (UniqueName: \"kubernetes.io/projected/4fc676e3-9b82-4156-9b46-c29cbe6b86b8-kube-api-access-ksn4l\") on node \"crc\" DevicePath \"\"" Jan 21 10:30:06 crc kubenswrapper[5119]: I0121 10:30:06.398890 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483190-68kpk" Jan 21 10:30:06 crc kubenswrapper[5119]: I0121 10:30:06.398916 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483190-68kpk" event={"ID":"4fc676e3-9b82-4156-9b46-c29cbe6b86b8","Type":"ContainerDied","Data":"490e3c59c2e7d6107f62ae921f7f40486833c20af8ad0c3f3b4705c862e62e39"} Jan 21 10:30:06 crc kubenswrapper[5119]: I0121 10:30:06.398960 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="490e3c59c2e7d6107f62ae921f7f40486833c20af8ad0c3f3b4705c862e62e39" Jan 21 10:30:06 crc kubenswrapper[5119]: I0121 10:30:06.676204 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483184-6h8tx"] Jan 21 10:30:06 crc kubenswrapper[5119]: I0121 10:30:06.684803 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483184-6h8tx"] Jan 21 10:30:08 crc kubenswrapper[5119]: I0121 10:30:08.604276 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1aee3e16-2b3d-4a8f-92ea-639793f73b1f" path="/var/lib/kubelet/pods/1aee3e16-2b3d-4a8f-92ea-639793f73b1f/volumes" Jan 21 10:30:35 crc kubenswrapper[5119]: I0121 10:30:35.570273 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-db9g6"] Jan 21 10:30:35 crc kubenswrapper[5119]: I0121 10:30:35.571821 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4052c95c-6d61-4b09-8714-4f1f73396f88" containerName="collect-profiles" Jan 21 10:30:35 crc kubenswrapper[5119]: I0121 10:30:35.571946 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="4052c95c-6d61-4b09-8714-4f1f73396f88" containerName="collect-profiles" Jan 21 10:30:35 crc kubenswrapper[5119]: I0121 10:30:35.572002 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4fc676e3-9b82-4156-9b46-c29cbe6b86b8" 
containerName="oc" Jan 21 10:30:35 crc kubenswrapper[5119]: I0121 10:30:35.572011 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fc676e3-9b82-4156-9b46-c29cbe6b86b8" containerName="oc" Jan 21 10:30:35 crc kubenswrapper[5119]: I0121 10:30:35.572157 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="4052c95c-6d61-4b09-8714-4f1f73396f88" containerName="collect-profiles" Jan 21 10:30:35 crc kubenswrapper[5119]: I0121 10:30:35.572171 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="4fc676e3-9b82-4156-9b46-c29cbe6b86b8" containerName="oc" Jan 21 10:30:35 crc kubenswrapper[5119]: I0121 10:30:35.577101 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-db9g6" Jan 21 10:30:35 crc kubenswrapper[5119]: I0121 10:30:35.603652 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-db9g6"] Jan 21 10:30:35 crc kubenswrapper[5119]: I0121 10:30:35.671450 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dpbq\" (UniqueName: \"kubernetes.io/projected/9715a744-29cf-4089-83b8-a04c8ce58370-kube-api-access-7dpbq\") pod \"infrawatch-operators-db9g6\" (UID: \"9715a744-29cf-4089-83b8-a04c8ce58370\") " pod="service-telemetry/infrawatch-operators-db9g6" Jan 21 10:30:35 crc kubenswrapper[5119]: I0121 10:30:35.773514 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7dpbq\" (UniqueName: \"kubernetes.io/projected/9715a744-29cf-4089-83b8-a04c8ce58370-kube-api-access-7dpbq\") pod \"infrawatch-operators-db9g6\" (UID: \"9715a744-29cf-4089-83b8-a04c8ce58370\") " pod="service-telemetry/infrawatch-operators-db9g6" Jan 21 10:30:35 crc kubenswrapper[5119]: I0121 10:30:35.792126 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dpbq\" (UniqueName: 
\"kubernetes.io/projected/9715a744-29cf-4089-83b8-a04c8ce58370-kube-api-access-7dpbq\") pod \"infrawatch-operators-db9g6\" (UID: \"9715a744-29cf-4089-83b8-a04c8ce58370\") " pod="service-telemetry/infrawatch-operators-db9g6" Jan 21 10:30:35 crc kubenswrapper[5119]: I0121 10:30:35.898063 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-db9g6" Jan 21 10:30:36 crc kubenswrapper[5119]: I0121 10:30:36.346682 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-db9g6"] Jan 21 10:30:36 crc kubenswrapper[5119]: I0121 10:30:36.353827 5119 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 10:30:36 crc kubenswrapper[5119]: I0121 10:30:36.623799 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-db9g6" event={"ID":"9715a744-29cf-4089-83b8-a04c8ce58370","Type":"ContainerStarted","Data":"11e4ef4de34e88422b64a4e5ecd0915ff83e23e38a3ddda4db38dbca27a764d2"} Jan 21 10:30:36 crc kubenswrapper[5119]: I0121 10:30:36.624038 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-db9g6" event={"ID":"9715a744-29cf-4089-83b8-a04c8ce58370","Type":"ContainerStarted","Data":"625d05cfeb3c37bcb2bb7472fc8bc7c85710b529d511232d03e107d2544b1a06"} Jan 21 10:30:36 crc kubenswrapper[5119]: I0121 10:30:36.643328 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-db9g6" podStartSLOduration=1.549101253 podStartE2EDuration="1.643310292s" podCreationTimestamp="2026-01-21 10:30:35 +0000 UTC" firstStartedPulling="2026-01-21 10:30:36.354005904 +0000 UTC m=+2152.022097582" lastFinishedPulling="2026-01-21 10:30:36.448214923 +0000 UTC m=+2152.116306621" observedRunningTime="2026-01-21 10:30:36.636145657 +0000 UTC m=+2152.304237335" watchObservedRunningTime="2026-01-21 10:30:36.643310292 +0000 
UTC m=+2152.311401970" Jan 21 10:30:45 crc kubenswrapper[5119]: I0121 10:30:45.899253 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-db9g6" Jan 21 10:30:45 crc kubenswrapper[5119]: I0121 10:30:45.899680 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-db9g6" Jan 21 10:30:45 crc kubenswrapper[5119]: I0121 10:30:45.928587 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-db9g6" Jan 21 10:30:46 crc kubenswrapper[5119]: I0121 10:30:46.744853 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-db9g6" Jan 21 10:30:47 crc kubenswrapper[5119]: I0121 10:30:47.163376 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-db9g6"] Jan 21 10:30:48 crc kubenswrapper[5119]: I0121 10:30:48.714390 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-db9g6" podUID="9715a744-29cf-4089-83b8-a04c8ce58370" containerName="registry-server" containerID="cri-o://11e4ef4de34e88422b64a4e5ecd0915ff83e23e38a3ddda4db38dbca27a764d2" gracePeriod=2 Jan 21 10:30:49 crc kubenswrapper[5119]: I0121 10:30:49.213590 5119 scope.go:117] "RemoveContainer" containerID="fe9e208109fff8950214b0c4015d37f36fafecb051869c105ddb4281ce62120a" Jan 21 10:30:49 crc kubenswrapper[5119]: I0121 10:30:49.344945 5119 scope.go:117] "RemoveContainer" containerID="fe499b7174c2bdcf92728788c1ebaa1347a73922495315eb8a10eb6fd6049e8b" Jan 21 10:30:52 crc kubenswrapper[5119]: I0121 10:30:52.757282 5119 generic.go:358] "Generic (PLEG): container finished" podID="9715a744-29cf-4089-83b8-a04c8ce58370" containerID="11e4ef4de34e88422b64a4e5ecd0915ff83e23e38a3ddda4db38dbca27a764d2" exitCode=0 Jan 21 10:30:52 crc kubenswrapper[5119]: I0121 10:30:52.757451 
5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-db9g6" event={"ID":"9715a744-29cf-4089-83b8-a04c8ce58370","Type":"ContainerDied","Data":"11e4ef4de34e88422b64a4e5ecd0915ff83e23e38a3ddda4db38dbca27a764d2"} Jan 21 10:30:52 crc kubenswrapper[5119]: I0121 10:30:52.838733 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-db9g6" Jan 21 10:30:52 crc kubenswrapper[5119]: I0121 10:30:52.954352 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dpbq\" (UniqueName: \"kubernetes.io/projected/9715a744-29cf-4089-83b8-a04c8ce58370-kube-api-access-7dpbq\") pod \"9715a744-29cf-4089-83b8-a04c8ce58370\" (UID: \"9715a744-29cf-4089-83b8-a04c8ce58370\") " Jan 21 10:30:52 crc kubenswrapper[5119]: I0121 10:30:52.960962 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9715a744-29cf-4089-83b8-a04c8ce58370-kube-api-access-7dpbq" (OuterVolumeSpecName: "kube-api-access-7dpbq") pod "9715a744-29cf-4089-83b8-a04c8ce58370" (UID: "9715a744-29cf-4089-83b8-a04c8ce58370"). InnerVolumeSpecName "kube-api-access-7dpbq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:30:53 crc kubenswrapper[5119]: I0121 10:30:53.056954 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7dpbq\" (UniqueName: \"kubernetes.io/projected/9715a744-29cf-4089-83b8-a04c8ce58370-kube-api-access-7dpbq\") on node \"crc\" DevicePath \"\"" Jan 21 10:30:53 crc kubenswrapper[5119]: I0121 10:30:53.767013 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-db9g6" Jan 21 10:30:53 crc kubenswrapper[5119]: I0121 10:30:53.767012 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-db9g6" event={"ID":"9715a744-29cf-4089-83b8-a04c8ce58370","Type":"ContainerDied","Data":"625d05cfeb3c37bcb2bb7472fc8bc7c85710b529d511232d03e107d2544b1a06"} Jan 21 10:30:53 crc kubenswrapper[5119]: I0121 10:30:53.767136 5119 scope.go:117] "RemoveContainer" containerID="11e4ef4de34e88422b64a4e5ecd0915ff83e23e38a3ddda4db38dbca27a764d2" Jan 21 10:30:53 crc kubenswrapper[5119]: I0121 10:30:53.815317 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-db9g6"] Jan 21 10:30:53 crc kubenswrapper[5119]: I0121 10:30:53.825111 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-db9g6"] Jan 21 10:30:54 crc kubenswrapper[5119]: I0121 10:30:54.598970 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9715a744-29cf-4089-83b8-a04c8ce58370" path="/var/lib/kubelet/pods/9715a744-29cf-4089-83b8-a04c8ce58370/volumes" Jan 21 10:31:49 crc kubenswrapper[5119]: I0121 10:31:49.918867 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:31:49 crc kubenswrapper[5119]: I0121 10:31:49.919525 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:32:00 crc kubenswrapper[5119]: I0121 10:32:00.139049 5119 kubelet.go:2537] 
"SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483192-b482b"] Jan 21 10:32:00 crc kubenswrapper[5119]: I0121 10:32:00.140255 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9715a744-29cf-4089-83b8-a04c8ce58370" containerName="registry-server" Jan 21 10:32:00 crc kubenswrapper[5119]: I0121 10:32:00.140268 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="9715a744-29cf-4089-83b8-a04c8ce58370" containerName="registry-server" Jan 21 10:32:00 crc kubenswrapper[5119]: I0121 10:32:00.140401 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="9715a744-29cf-4089-83b8-a04c8ce58370" containerName="registry-server" Jan 21 10:32:00 crc kubenswrapper[5119]: I0121 10:32:00.149809 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483192-b482b"] Jan 21 10:32:00 crc kubenswrapper[5119]: I0121 10:32:00.150101 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483192-b482b" Jan 21 10:32:00 crc kubenswrapper[5119]: I0121 10:32:00.152441 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\"" Jan 21 10:32:00 crc kubenswrapper[5119]: I0121 10:32:00.152685 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:32:00 crc kubenswrapper[5119]: I0121 10:32:00.152871 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:32:00 crc kubenswrapper[5119]: I0121 10:32:00.285105 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsx7z\" (UniqueName: \"kubernetes.io/projected/b355af60-23e7-4914-ade4-1003b80567b7-kube-api-access-gsx7z\") pod \"auto-csr-approver-29483192-b482b\" (UID: 
\"b355af60-23e7-4914-ade4-1003b80567b7\") " pod="openshift-infra/auto-csr-approver-29483192-b482b" Jan 21 10:32:00 crc kubenswrapper[5119]: I0121 10:32:00.387103 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gsx7z\" (UniqueName: \"kubernetes.io/projected/b355af60-23e7-4914-ade4-1003b80567b7-kube-api-access-gsx7z\") pod \"auto-csr-approver-29483192-b482b\" (UID: \"b355af60-23e7-4914-ade4-1003b80567b7\") " pod="openshift-infra/auto-csr-approver-29483192-b482b" Jan 21 10:32:00 crc kubenswrapper[5119]: I0121 10:32:00.407220 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsx7z\" (UniqueName: \"kubernetes.io/projected/b355af60-23e7-4914-ade4-1003b80567b7-kube-api-access-gsx7z\") pod \"auto-csr-approver-29483192-b482b\" (UID: \"b355af60-23e7-4914-ade4-1003b80567b7\") " pod="openshift-infra/auto-csr-approver-29483192-b482b" Jan 21 10:32:00 crc kubenswrapper[5119]: I0121 10:32:00.473214 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483192-b482b" Jan 21 10:32:00 crc kubenswrapper[5119]: I0121 10:32:00.720965 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483192-b482b"] Jan 21 10:32:01 crc kubenswrapper[5119]: I0121 10:32:01.317077 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483192-b482b" event={"ID":"b355af60-23e7-4914-ade4-1003b80567b7","Type":"ContainerStarted","Data":"bd6063e64642601f7d8a0e47a048d788f69b10c6bce8a1baf0971ff1faeacc08"} Jan 21 10:32:02 crc kubenswrapper[5119]: I0121 10:32:02.326326 5119 generic.go:358] "Generic (PLEG): container finished" podID="b355af60-23e7-4914-ade4-1003b80567b7" containerID="99ba77bc79444084af67279d38f49d58ffe79a37d38893eccc811ff94a0a8d66" exitCode=0 Jan 21 10:32:02 crc kubenswrapper[5119]: I0121 10:32:02.326570 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483192-b482b" event={"ID":"b355af60-23e7-4914-ade4-1003b80567b7","Type":"ContainerDied","Data":"99ba77bc79444084af67279d38f49d58ffe79a37d38893eccc811ff94a0a8d66"} Jan 21 10:32:03 crc kubenswrapper[5119]: I0121 10:32:03.620860 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483192-b482b" Jan 21 10:32:03 crc kubenswrapper[5119]: I0121 10:32:03.740160 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsx7z\" (UniqueName: \"kubernetes.io/projected/b355af60-23e7-4914-ade4-1003b80567b7-kube-api-access-gsx7z\") pod \"b355af60-23e7-4914-ade4-1003b80567b7\" (UID: \"b355af60-23e7-4914-ade4-1003b80567b7\") " Jan 21 10:32:03 crc kubenswrapper[5119]: I0121 10:32:03.746647 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b355af60-23e7-4914-ade4-1003b80567b7-kube-api-access-gsx7z" (OuterVolumeSpecName: "kube-api-access-gsx7z") pod "b355af60-23e7-4914-ade4-1003b80567b7" (UID: "b355af60-23e7-4914-ade4-1003b80567b7"). InnerVolumeSpecName "kube-api-access-gsx7z". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:32:03 crc kubenswrapper[5119]: I0121 10:32:03.843322 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gsx7z\" (UniqueName: \"kubernetes.io/projected/b355af60-23e7-4914-ade4-1003b80567b7-kube-api-access-gsx7z\") on node \"crc\" DevicePath \"\"" Jan 21 10:32:04 crc kubenswrapper[5119]: I0121 10:32:04.340363 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483192-b482b" event={"ID":"b355af60-23e7-4914-ade4-1003b80567b7","Type":"ContainerDied","Data":"bd6063e64642601f7d8a0e47a048d788f69b10c6bce8a1baf0971ff1faeacc08"} Jan 21 10:32:04 crc kubenswrapper[5119]: I0121 10:32:04.340407 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd6063e64642601f7d8a0e47a048d788f69b10c6bce8a1baf0971ff1faeacc08" Jan 21 10:32:04 crc kubenswrapper[5119]: I0121 10:32:04.340474 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483192-b482b" Jan 21 10:32:04 crc kubenswrapper[5119]: I0121 10:32:04.681582 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483186-7wqzw"] Jan 21 10:32:04 crc kubenswrapper[5119]: I0121 10:32:04.686357 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483186-7wqzw"] Jan 21 10:32:06 crc kubenswrapper[5119]: I0121 10:32:06.602163 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8b6af3d-b6ee-4fc0-94f2-17b1121b15af" path="/var/lib/kubelet/pods/c8b6af3d-b6ee-4fc0-94f2-17b1121b15af/volumes" Jan 21 10:32:19 crc kubenswrapper[5119]: I0121 10:32:19.919791 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:32:19 crc kubenswrapper[5119]: I0121 10:32:19.920413 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:32:49 crc kubenswrapper[5119]: I0121 10:32:49.918654 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:32:49 crc kubenswrapper[5119]: I0121 10:32:49.919216 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" 
podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:32:49 crc kubenswrapper[5119]: I0121 10:32:49.919266 5119 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" Jan 21 10:32:49 crc kubenswrapper[5119]: I0121 10:32:49.920095 5119 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"db5c74c1b0820ae259270b2d4b510cc9e42408df067b29b3a65632154df8b8bc"} pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 10:32:49 crc kubenswrapper[5119]: I0121 10:32:49.920152 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" containerID="cri-o://db5c74c1b0820ae259270b2d4b510cc9e42408df067b29b3a65632154df8b8bc" gracePeriod=600 Jan 21 10:32:50 crc kubenswrapper[5119]: I0121 10:32:50.715906 5119 generic.go:358] "Generic (PLEG): container finished" podID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerID="db5c74c1b0820ae259270b2d4b510cc9e42408df067b29b3a65632154df8b8bc" exitCode=0 Jan 21 10:32:50 crc kubenswrapper[5119]: I0121 10:32:50.715977 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerDied","Data":"db5c74c1b0820ae259270b2d4b510cc9e42408df067b29b3a65632154df8b8bc"} Jan 21 10:32:50 crc kubenswrapper[5119]: I0121 10:32:50.716593 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerStarted","Data":"73bddcb64083531b50226595722784d64d95d2bc702addc9793ddeca381cc5d7"} Jan 21 10:32:50 crc kubenswrapper[5119]: I0121 10:32:50.716807 5119 scope.go:117] "RemoveContainer" containerID="aeb1e8d8d657a3045b41d5bd8009eb6931ecf3440b80f525691d93348855df75" Jan 21 10:32:52 crc kubenswrapper[5119]: I0121 10:32:52.153672 5119 scope.go:117] "RemoveContainer" containerID="1e0556cfab979244bf8df2f376a0e756047997eb36855b57a357e7c64d492889" Jan 21 10:34:00 crc kubenswrapper[5119]: I0121 10:34:00.136848 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483194-t4k25"] Jan 21 10:34:00 crc kubenswrapper[5119]: I0121 10:34:00.138090 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b355af60-23e7-4914-ade4-1003b80567b7" containerName="oc" Jan 21 10:34:00 crc kubenswrapper[5119]: I0121 10:34:00.138105 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="b355af60-23e7-4914-ade4-1003b80567b7" containerName="oc" Jan 21 10:34:00 crc kubenswrapper[5119]: I0121 10:34:00.138296 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="b355af60-23e7-4914-ade4-1003b80567b7" containerName="oc" Jan 21 10:34:00 crc kubenswrapper[5119]: I0121 10:34:00.155415 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483194-t4k25" Jan 21 10:34:00 crc kubenswrapper[5119]: I0121 10:34:00.156398 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483194-t4k25"] Jan 21 10:34:00 crc kubenswrapper[5119]: I0121 10:34:00.157464 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:34:00 crc kubenswrapper[5119]: I0121 10:34:00.159007 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:34:00 crc kubenswrapper[5119]: I0121 10:34:00.159164 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\"" Jan 21 10:34:00 crc kubenswrapper[5119]: I0121 10:34:00.304442 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vgpr\" (UniqueName: \"kubernetes.io/projected/f687f17a-51f5-4455-86e1-2f55199ed279-kube-api-access-8vgpr\") pod \"auto-csr-approver-29483194-t4k25\" (UID: \"f687f17a-51f5-4455-86e1-2f55199ed279\") " pod="openshift-infra/auto-csr-approver-29483194-t4k25" Jan 21 10:34:00 crc kubenswrapper[5119]: I0121 10:34:00.405771 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8vgpr\" (UniqueName: \"kubernetes.io/projected/f687f17a-51f5-4455-86e1-2f55199ed279-kube-api-access-8vgpr\") pod \"auto-csr-approver-29483194-t4k25\" (UID: \"f687f17a-51f5-4455-86e1-2f55199ed279\") " pod="openshift-infra/auto-csr-approver-29483194-t4k25" Jan 21 10:34:00 crc kubenswrapper[5119]: I0121 10:34:00.440352 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vgpr\" (UniqueName: \"kubernetes.io/projected/f687f17a-51f5-4455-86e1-2f55199ed279-kube-api-access-8vgpr\") pod \"auto-csr-approver-29483194-t4k25\" (UID: 
\"f687f17a-51f5-4455-86e1-2f55199ed279\") " pod="openshift-infra/auto-csr-approver-29483194-t4k25" Jan 21 10:34:00 crc kubenswrapper[5119]: I0121 10:34:00.482010 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483194-t4k25" Jan 21 10:34:00 crc kubenswrapper[5119]: I0121 10:34:00.687848 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483194-t4k25"] Jan 21 10:34:01 crc kubenswrapper[5119]: I0121 10:34:01.281907 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483194-t4k25" event={"ID":"f687f17a-51f5-4455-86e1-2f55199ed279","Type":"ContainerStarted","Data":"8a65a299a27a0242a9aecca27c71696338c3978c0f7fc150e44539ee8597ba1a"} Jan 21 10:34:02 crc kubenswrapper[5119]: I0121 10:34:02.290387 5119 generic.go:358] "Generic (PLEG): container finished" podID="f687f17a-51f5-4455-86e1-2f55199ed279" containerID="ca7e82efc144e2fa922ae479cb25773646ec130dbf7f3ba01303684c495617fd" exitCode=0 Jan 21 10:34:02 crc kubenswrapper[5119]: I0121 10:34:02.290457 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483194-t4k25" event={"ID":"f687f17a-51f5-4455-86e1-2f55199ed279","Type":"ContainerDied","Data":"ca7e82efc144e2fa922ae479cb25773646ec130dbf7f3ba01303684c495617fd"} Jan 21 10:34:03 crc kubenswrapper[5119]: I0121 10:34:03.581341 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483194-t4k25" Jan 21 10:34:03 crc kubenswrapper[5119]: I0121 10:34:03.656406 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vgpr\" (UniqueName: \"kubernetes.io/projected/f687f17a-51f5-4455-86e1-2f55199ed279-kube-api-access-8vgpr\") pod \"f687f17a-51f5-4455-86e1-2f55199ed279\" (UID: \"f687f17a-51f5-4455-86e1-2f55199ed279\") " Jan 21 10:34:03 crc kubenswrapper[5119]: I0121 10:34:03.664422 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f687f17a-51f5-4455-86e1-2f55199ed279-kube-api-access-8vgpr" (OuterVolumeSpecName: "kube-api-access-8vgpr") pod "f687f17a-51f5-4455-86e1-2f55199ed279" (UID: "f687f17a-51f5-4455-86e1-2f55199ed279"). InnerVolumeSpecName "kube-api-access-8vgpr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:34:03 crc kubenswrapper[5119]: I0121 10:34:03.758323 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8vgpr\" (UniqueName: \"kubernetes.io/projected/f687f17a-51f5-4455-86e1-2f55199ed279-kube-api-access-8vgpr\") on node \"crc\" DevicePath \"\"" Jan 21 10:34:04 crc kubenswrapper[5119]: I0121 10:34:04.310720 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483194-t4k25" event={"ID":"f687f17a-51f5-4455-86e1-2f55199ed279","Type":"ContainerDied","Data":"8a65a299a27a0242a9aecca27c71696338c3978c0f7fc150e44539ee8597ba1a"} Jan 21 10:34:04 crc kubenswrapper[5119]: I0121 10:34:04.310765 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a65a299a27a0242a9aecca27c71696338c3978c0f7fc150e44539ee8597ba1a" Jan 21 10:34:04 crc kubenswrapper[5119]: I0121 10:34:04.310828 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483194-t4k25" Jan 21 10:34:04 crc kubenswrapper[5119]: I0121 10:34:04.643817 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483188-cszvw"] Jan 21 10:34:04 crc kubenswrapper[5119]: I0121 10:34:04.649310 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483188-cszvw"] Jan 21 10:34:06 crc kubenswrapper[5119]: I0121 10:34:06.612809 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f17d6e4-dcf0-4543-a28e-aeff2c22d3ae" path="/var/lib/kubelet/pods/9f17d6e4-dcf0-4543-a28e-aeff2c22d3ae/volumes" Jan 21 10:34:46 crc kubenswrapper[5119]: I0121 10:34:46.396578 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log" Jan 21 10:34:46 crc kubenswrapper[5119]: I0121 10:34:46.396688 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log" Jan 21 10:34:46 crc kubenswrapper[5119]: I0121 10:34:46.400721 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 21 10:34:46 crc kubenswrapper[5119]: I0121 10:34:46.400739 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 21 10:34:52 crc kubenswrapper[5119]: I0121 10:34:52.263814 5119 scope.go:117] "RemoveContainer" containerID="7a364e3091b25955173d0d4bba67eecc3588fd3daa563abe3cc1c18d22aee89a" Jan 21 10:35:19 crc kubenswrapper[5119]: I0121 10:35:19.919108 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:35:19 crc kubenswrapper[5119]: I0121 10:35:19.919664 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:35:44 crc kubenswrapper[5119]: I0121 10:35:44.613809 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-vzd28"] Jan 21 10:35:44 crc kubenswrapper[5119]: I0121 10:35:44.617315 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f687f17a-51f5-4455-86e1-2f55199ed279" containerName="oc" Jan 21 10:35:44 crc kubenswrapper[5119]: I0121 10:35:44.617345 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="f687f17a-51f5-4455-86e1-2f55199ed279" containerName="oc" Jan 21 10:35:44 crc kubenswrapper[5119]: I0121 10:35:44.617494 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="f687f17a-51f5-4455-86e1-2f55199ed279" containerName="oc" Jan 21 10:35:44 crc kubenswrapper[5119]: I0121 10:35:44.628987 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-vzd28" Jan 21 10:35:44 crc kubenswrapper[5119]: I0121 10:35:44.629318 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-vzd28"] Jan 21 10:35:44 crc kubenswrapper[5119]: I0121 10:35:44.775382 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c4gw\" (UniqueName: \"kubernetes.io/projected/458c93cf-226c-4ea7-9e7f-109bd85c080f-kube-api-access-4c4gw\") pod \"infrawatch-operators-vzd28\" (UID: \"458c93cf-226c-4ea7-9e7f-109bd85c080f\") " pod="service-telemetry/infrawatch-operators-vzd28" Jan 21 10:35:44 crc kubenswrapper[5119]: I0121 10:35:44.877165 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4c4gw\" (UniqueName: \"kubernetes.io/projected/458c93cf-226c-4ea7-9e7f-109bd85c080f-kube-api-access-4c4gw\") pod \"infrawatch-operators-vzd28\" (UID: \"458c93cf-226c-4ea7-9e7f-109bd85c080f\") " pod="service-telemetry/infrawatch-operators-vzd28" Jan 21 10:35:44 crc kubenswrapper[5119]: I0121 10:35:44.922413 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c4gw\" (UniqueName: \"kubernetes.io/projected/458c93cf-226c-4ea7-9e7f-109bd85c080f-kube-api-access-4c4gw\") pod \"infrawatch-operators-vzd28\" (UID: \"458c93cf-226c-4ea7-9e7f-109bd85c080f\") " pod="service-telemetry/infrawatch-operators-vzd28" Jan 21 10:35:44 crc kubenswrapper[5119]: I0121 10:35:44.953762 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-vzd28" Jan 21 10:35:45 crc kubenswrapper[5119]: I0121 10:35:45.155737 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-vzd28"] Jan 21 10:35:45 crc kubenswrapper[5119]: I0121 10:35:45.158701 5119 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 10:35:46 crc kubenswrapper[5119]: I0121 10:35:46.152510 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-vzd28" event={"ID":"458c93cf-226c-4ea7-9e7f-109bd85c080f","Type":"ContainerStarted","Data":"355217a9a7a68cdc0851a97b030685e2b260c0a96e9f4759c9d3fc40b30b0be5"} Jan 21 10:35:46 crc kubenswrapper[5119]: I0121 10:35:46.152790 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-vzd28" event={"ID":"458c93cf-226c-4ea7-9e7f-109bd85c080f","Type":"ContainerStarted","Data":"ebb2336745678b728219d57c30fc353baaa7108f9562ccbeaf94e135ef4c54c1"} Jan 21 10:35:46 crc kubenswrapper[5119]: I0121 10:35:46.172520 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-vzd28" podStartSLOduration=2.085116609 podStartE2EDuration="2.172500138s" podCreationTimestamp="2026-01-21 10:35:44 +0000 UTC" firstStartedPulling="2026-01-21 10:35:45.158903734 +0000 UTC m=+2460.826995412" lastFinishedPulling="2026-01-21 10:35:45.246287263 +0000 UTC m=+2460.914378941" observedRunningTime="2026-01-21 10:35:46.170042001 +0000 UTC m=+2461.838133699" watchObservedRunningTime="2026-01-21 10:35:46.172500138 +0000 UTC m=+2461.840591816" Jan 21 10:35:49 crc kubenswrapper[5119]: I0121 10:35:49.918483 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Jan 21 10:35:49 crc kubenswrapper[5119]: I0121 10:35:49.920234 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:35:54 crc kubenswrapper[5119]: I0121 10:35:54.954794 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-vzd28" Jan 21 10:35:54 crc kubenswrapper[5119]: I0121 10:35:54.956595 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-vzd28" Jan 21 10:35:54 crc kubenswrapper[5119]: I0121 10:35:54.996945 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-vzd28" Jan 21 10:35:55 crc kubenswrapper[5119]: I0121 10:35:55.248578 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-vzd28" Jan 21 10:35:56 crc kubenswrapper[5119]: I0121 10:35:56.208017 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-vzd28"] Jan 21 10:35:57 crc kubenswrapper[5119]: I0121 10:35:57.231008 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-vzd28" podUID="458c93cf-226c-4ea7-9e7f-109bd85c080f" containerName="registry-server" containerID="cri-o://355217a9a7a68cdc0851a97b030685e2b260c0a96e9f4759c9d3fc40b30b0be5" gracePeriod=2 Jan 21 10:35:57 crc kubenswrapper[5119]: I0121 10:35:57.610907 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-vzd28" Jan 21 10:35:57 crc kubenswrapper[5119]: I0121 10:35:57.667104 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c4gw\" (UniqueName: \"kubernetes.io/projected/458c93cf-226c-4ea7-9e7f-109bd85c080f-kube-api-access-4c4gw\") pod \"458c93cf-226c-4ea7-9e7f-109bd85c080f\" (UID: \"458c93cf-226c-4ea7-9e7f-109bd85c080f\") " Jan 21 10:35:57 crc kubenswrapper[5119]: I0121 10:35:57.673061 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/458c93cf-226c-4ea7-9e7f-109bd85c080f-kube-api-access-4c4gw" (OuterVolumeSpecName: "kube-api-access-4c4gw") pod "458c93cf-226c-4ea7-9e7f-109bd85c080f" (UID: "458c93cf-226c-4ea7-9e7f-109bd85c080f"). InnerVolumeSpecName "kube-api-access-4c4gw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:35:57 crc kubenswrapper[5119]: I0121 10:35:57.770440 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4c4gw\" (UniqueName: \"kubernetes.io/projected/458c93cf-226c-4ea7-9e7f-109bd85c080f-kube-api-access-4c4gw\") on node \"crc\" DevicePath \"\"" Jan 21 10:35:58 crc kubenswrapper[5119]: I0121 10:35:58.239711 5119 generic.go:358] "Generic (PLEG): container finished" podID="458c93cf-226c-4ea7-9e7f-109bd85c080f" containerID="355217a9a7a68cdc0851a97b030685e2b260c0a96e9f4759c9d3fc40b30b0be5" exitCode=0 Jan 21 10:35:58 crc kubenswrapper[5119]: I0121 10:35:58.239869 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-vzd28" event={"ID":"458c93cf-226c-4ea7-9e7f-109bd85c080f","Type":"ContainerDied","Data":"355217a9a7a68cdc0851a97b030685e2b260c0a96e9f4759c9d3fc40b30b0be5"} Jan 21 10:35:58 crc kubenswrapper[5119]: I0121 10:35:58.239900 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-vzd28" 
event={"ID":"458c93cf-226c-4ea7-9e7f-109bd85c080f","Type":"ContainerDied","Data":"ebb2336745678b728219d57c30fc353baaa7108f9562ccbeaf94e135ef4c54c1"} Jan 21 10:35:58 crc kubenswrapper[5119]: I0121 10:35:58.239920 5119 scope.go:117] "RemoveContainer" containerID="355217a9a7a68cdc0851a97b030685e2b260c0a96e9f4759c9d3fc40b30b0be5" Jan 21 10:35:58 crc kubenswrapper[5119]: I0121 10:35:58.240065 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-vzd28" Jan 21 10:35:58 crc kubenswrapper[5119]: I0121 10:35:58.274715 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-vzd28"] Jan 21 10:35:58 crc kubenswrapper[5119]: I0121 10:35:58.281162 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-vzd28"] Jan 21 10:35:58 crc kubenswrapper[5119]: I0121 10:35:58.285144 5119 scope.go:117] "RemoveContainer" containerID="355217a9a7a68cdc0851a97b030685e2b260c0a96e9f4759c9d3fc40b30b0be5" Jan 21 10:35:58 crc kubenswrapper[5119]: E0121 10:35:58.285522 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"355217a9a7a68cdc0851a97b030685e2b260c0a96e9f4759c9d3fc40b30b0be5\": container with ID starting with 355217a9a7a68cdc0851a97b030685e2b260c0a96e9f4759c9d3fc40b30b0be5 not found: ID does not exist" containerID="355217a9a7a68cdc0851a97b030685e2b260c0a96e9f4759c9d3fc40b30b0be5" Jan 21 10:35:58 crc kubenswrapper[5119]: I0121 10:35:58.285561 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"355217a9a7a68cdc0851a97b030685e2b260c0a96e9f4759c9d3fc40b30b0be5"} err="failed to get container status \"355217a9a7a68cdc0851a97b030685e2b260c0a96e9f4759c9d3fc40b30b0be5\": rpc error: code = NotFound desc = could not find container \"355217a9a7a68cdc0851a97b030685e2b260c0a96e9f4759c9d3fc40b30b0be5\": container with ID 
starting with 355217a9a7a68cdc0851a97b030685e2b260c0a96e9f4759c9d3fc40b30b0be5 not found: ID does not exist" Jan 21 10:35:58 crc kubenswrapper[5119]: I0121 10:35:58.599430 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="458c93cf-226c-4ea7-9e7f-109bd85c080f" path="/var/lib/kubelet/pods/458c93cf-226c-4ea7-9e7f-109bd85c080f/volumes" Jan 21 10:36:00 crc kubenswrapper[5119]: I0121 10:36:00.133369 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483196-8bpq9"] Jan 21 10:36:00 crc kubenswrapper[5119]: I0121 10:36:00.134036 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="458c93cf-226c-4ea7-9e7f-109bd85c080f" containerName="registry-server" Jan 21 10:36:00 crc kubenswrapper[5119]: I0121 10:36:00.134052 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="458c93cf-226c-4ea7-9e7f-109bd85c080f" containerName="registry-server" Jan 21 10:36:00 crc kubenswrapper[5119]: I0121 10:36:00.134180 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="458c93cf-226c-4ea7-9e7f-109bd85c080f" containerName="registry-server" Jan 21 10:36:00 crc kubenswrapper[5119]: I0121 10:36:00.147316 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483196-8bpq9"] Jan 21 10:36:00 crc kubenswrapper[5119]: I0121 10:36:00.147470 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483196-8bpq9" Jan 21 10:36:00 crc kubenswrapper[5119]: I0121 10:36:00.150648 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:36:00 crc kubenswrapper[5119]: I0121 10:36:00.150973 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\"" Jan 21 10:36:00 crc kubenswrapper[5119]: I0121 10:36:00.151055 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:36:00 crc kubenswrapper[5119]: I0121 10:36:00.314619 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8kx5\" (UniqueName: \"kubernetes.io/projected/ae15fc52-c0a6-44d6-9d27-494edd4f4f51-kube-api-access-z8kx5\") pod \"auto-csr-approver-29483196-8bpq9\" (UID: \"ae15fc52-c0a6-44d6-9d27-494edd4f4f51\") " pod="openshift-infra/auto-csr-approver-29483196-8bpq9" Jan 21 10:36:00 crc kubenswrapper[5119]: I0121 10:36:00.415929 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z8kx5\" (UniqueName: \"kubernetes.io/projected/ae15fc52-c0a6-44d6-9d27-494edd4f4f51-kube-api-access-z8kx5\") pod \"auto-csr-approver-29483196-8bpq9\" (UID: \"ae15fc52-c0a6-44d6-9d27-494edd4f4f51\") " pod="openshift-infra/auto-csr-approver-29483196-8bpq9" Jan 21 10:36:00 crc kubenswrapper[5119]: I0121 10:36:00.433997 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8kx5\" (UniqueName: \"kubernetes.io/projected/ae15fc52-c0a6-44d6-9d27-494edd4f4f51-kube-api-access-z8kx5\") pod \"auto-csr-approver-29483196-8bpq9\" (UID: \"ae15fc52-c0a6-44d6-9d27-494edd4f4f51\") " pod="openshift-infra/auto-csr-approver-29483196-8bpq9" Jan 21 10:36:00 crc kubenswrapper[5119]: I0121 10:36:00.464626 5119 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483196-8bpq9" Jan 21 10:36:00 crc kubenswrapper[5119]: I0121 10:36:00.853925 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483196-8bpq9"] Jan 21 10:36:00 crc kubenswrapper[5119]: W0121 10:36:00.873664 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae15fc52_c0a6_44d6_9d27_494edd4f4f51.slice/crio-dbd2e6254922fbdb9f95da7add22b1e9fb683e4a4fbdd8cd3d111397e4b69354 WatchSource:0}: Error finding container dbd2e6254922fbdb9f95da7add22b1e9fb683e4a4fbdd8cd3d111397e4b69354: Status 404 returned error can't find the container with id dbd2e6254922fbdb9f95da7add22b1e9fb683e4a4fbdd8cd3d111397e4b69354 Jan 21 10:36:01 crc kubenswrapper[5119]: I0121 10:36:01.259792 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483196-8bpq9" event={"ID":"ae15fc52-c0a6-44d6-9d27-494edd4f4f51","Type":"ContainerStarted","Data":"dbd2e6254922fbdb9f95da7add22b1e9fb683e4a4fbdd8cd3d111397e4b69354"} Jan 21 10:36:02 crc kubenswrapper[5119]: I0121 10:36:02.269178 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483196-8bpq9" event={"ID":"ae15fc52-c0a6-44d6-9d27-494edd4f4f51","Type":"ContainerStarted","Data":"cc910c6f352c1246686d27afed13fa91efb0f1244c60e27f121755f9d4a90b90"} Jan 21 10:36:03 crc kubenswrapper[5119]: I0121 10:36:03.278137 5119 generic.go:358] "Generic (PLEG): container finished" podID="ae15fc52-c0a6-44d6-9d27-494edd4f4f51" containerID="cc910c6f352c1246686d27afed13fa91efb0f1244c60e27f121755f9d4a90b90" exitCode=0 Jan 21 10:36:03 crc kubenswrapper[5119]: I0121 10:36:03.278227 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483196-8bpq9" 
event={"ID":"ae15fc52-c0a6-44d6-9d27-494edd4f4f51","Type":"ContainerDied","Data":"cc910c6f352c1246686d27afed13fa91efb0f1244c60e27f121755f9d4a90b90"} Jan 21 10:36:04 crc kubenswrapper[5119]: I0121 10:36:04.597547 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483196-8bpq9" Jan 21 10:36:04 crc kubenswrapper[5119]: I0121 10:36:04.685683 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8kx5\" (UniqueName: \"kubernetes.io/projected/ae15fc52-c0a6-44d6-9d27-494edd4f4f51-kube-api-access-z8kx5\") pod \"ae15fc52-c0a6-44d6-9d27-494edd4f4f51\" (UID: \"ae15fc52-c0a6-44d6-9d27-494edd4f4f51\") " Jan 21 10:36:04 crc kubenswrapper[5119]: I0121 10:36:04.692314 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae15fc52-c0a6-44d6-9d27-494edd4f4f51-kube-api-access-z8kx5" (OuterVolumeSpecName: "kube-api-access-z8kx5") pod "ae15fc52-c0a6-44d6-9d27-494edd4f4f51" (UID: "ae15fc52-c0a6-44d6-9d27-494edd4f4f51"). InnerVolumeSpecName "kube-api-access-z8kx5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:36:04 crc kubenswrapper[5119]: I0121 10:36:04.786735 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z8kx5\" (UniqueName: \"kubernetes.io/projected/ae15fc52-c0a6-44d6-9d27-494edd4f4f51-kube-api-access-z8kx5\") on node \"crc\" DevicePath \"\"" Jan 21 10:36:05 crc kubenswrapper[5119]: I0121 10:36:05.292257 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483196-8bpq9" Jan 21 10:36:05 crc kubenswrapper[5119]: I0121 10:36:05.292303 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483196-8bpq9" event={"ID":"ae15fc52-c0a6-44d6-9d27-494edd4f4f51","Type":"ContainerDied","Data":"dbd2e6254922fbdb9f95da7add22b1e9fb683e4a4fbdd8cd3d111397e4b69354"} Jan 21 10:36:05 crc kubenswrapper[5119]: I0121 10:36:05.292359 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbd2e6254922fbdb9f95da7add22b1e9fb683e4a4fbdd8cd3d111397e4b69354" Jan 21 10:36:05 crc kubenswrapper[5119]: I0121 10:36:05.340026 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483190-68kpk"] Jan 21 10:36:05 crc kubenswrapper[5119]: I0121 10:36:05.344809 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483190-68kpk"] Jan 21 10:36:06 crc kubenswrapper[5119]: I0121 10:36:06.598448 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fc676e3-9b82-4156-9b46-c29cbe6b86b8" path="/var/lib/kubelet/pods/4fc676e3-9b82-4156-9b46-c29cbe6b86b8/volumes" Jan 21 10:36:19 crc kubenswrapper[5119]: I0121 10:36:19.918842 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:36:19 crc kubenswrapper[5119]: I0121 10:36:19.919433 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:36:19 crc 
kubenswrapper[5119]: I0121 10:36:19.919489 5119 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" Jan 21 10:36:19 crc kubenswrapper[5119]: I0121 10:36:19.920285 5119 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"73bddcb64083531b50226595722784d64d95d2bc702addc9793ddeca381cc5d7"} pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 10:36:19 crc kubenswrapper[5119]: I0121 10:36:19.920353 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" containerID="cri-o://73bddcb64083531b50226595722784d64d95d2bc702addc9793ddeca381cc5d7" gracePeriod=600 Jan 21 10:36:20 crc kubenswrapper[5119]: E0121 10:36:20.040359 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:36:20 crc kubenswrapper[5119]: I0121 10:36:20.415118 5119 generic.go:358] "Generic (PLEG): container finished" podID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerID="73bddcb64083531b50226595722784d64d95d2bc702addc9793ddeca381cc5d7" exitCode=0 Jan 21 10:36:20 crc kubenswrapper[5119]: I0121 10:36:20.415195 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" 
event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerDied","Data":"73bddcb64083531b50226595722784d64d95d2bc702addc9793ddeca381cc5d7"} Jan 21 10:36:20 crc kubenswrapper[5119]: I0121 10:36:20.415970 5119 scope.go:117] "RemoveContainer" containerID="db5c74c1b0820ae259270b2d4b510cc9e42408df067b29b3a65632154df8b8bc" Jan 21 10:36:20 crc kubenswrapper[5119]: I0121 10:36:20.416874 5119 scope.go:117] "RemoveContainer" containerID="73bddcb64083531b50226595722784d64d95d2bc702addc9793ddeca381cc5d7" Jan 21 10:36:20 crc kubenswrapper[5119]: E0121 10:36:20.417277 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:36:34 crc kubenswrapper[5119]: I0121 10:36:34.596178 5119 scope.go:117] "RemoveContainer" containerID="73bddcb64083531b50226595722784d64d95d2bc702addc9793ddeca381cc5d7" Jan 21 10:36:34 crc kubenswrapper[5119]: E0121 10:36:34.596958 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:36:48 crc kubenswrapper[5119]: I0121 10:36:48.590373 5119 scope.go:117] "RemoveContainer" containerID="73bddcb64083531b50226595722784d64d95d2bc702addc9793ddeca381cc5d7" Jan 21 10:36:48 crc kubenswrapper[5119]: E0121 10:36:48.591503 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:36:52 crc kubenswrapper[5119]: I0121 10:36:52.385522 5119 scope.go:117] "RemoveContainer" containerID="a6e761489df7498ad8422973211a8edc6bacc7883be745210fad7dc02b55f00f" Jan 21 10:37:03 crc kubenswrapper[5119]: I0121 10:37:03.591113 5119 scope.go:117] "RemoveContainer" containerID="73bddcb64083531b50226595722784d64d95d2bc702addc9793ddeca381cc5d7" Jan 21 10:37:03 crc kubenswrapper[5119]: E0121 10:37:03.591874 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:37:16 crc kubenswrapper[5119]: I0121 10:37:16.591809 5119 scope.go:117] "RemoveContainer" containerID="73bddcb64083531b50226595722784d64d95d2bc702addc9793ddeca381cc5d7" Jan 21 10:37:16 crc kubenswrapper[5119]: E0121 10:37:16.593226 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:37:31 crc kubenswrapper[5119]: I0121 10:37:31.591319 5119 scope.go:117] "RemoveContainer" 
containerID="73bddcb64083531b50226595722784d64d95d2bc702addc9793ddeca381cc5d7" Jan 21 10:37:31 crc kubenswrapper[5119]: E0121 10:37:31.592120 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:37:46 crc kubenswrapper[5119]: I0121 10:37:46.595244 5119 scope.go:117] "RemoveContainer" containerID="73bddcb64083531b50226595722784d64d95d2bc702addc9793ddeca381cc5d7" Jan 21 10:37:46 crc kubenswrapper[5119]: E0121 10:37:46.595965 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:38:00 crc kubenswrapper[5119]: I0121 10:38:00.130337 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483198-fg4gr"] Jan 21 10:38:00 crc kubenswrapper[5119]: I0121 10:38:00.131783 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ae15fc52-c0a6-44d6-9d27-494edd4f4f51" containerName="oc" Jan 21 10:38:00 crc kubenswrapper[5119]: I0121 10:38:00.131798 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae15fc52-c0a6-44d6-9d27-494edd4f4f51" containerName="oc" Jan 21 10:38:00 crc kubenswrapper[5119]: I0121 10:38:00.131957 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="ae15fc52-c0a6-44d6-9d27-494edd4f4f51" containerName="oc" Jan 21 
10:38:00 crc kubenswrapper[5119]: I0121 10:38:00.145837 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483198-fg4gr"]
Jan 21 10:38:00 crc kubenswrapper[5119]: I0121 10:38:00.145959 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483198-fg4gr"
Jan 21 10:38:00 crc kubenswrapper[5119]: I0121 10:38:00.148783 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\""
Jan 21 10:38:00 crc kubenswrapper[5119]: I0121 10:38:00.149676 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 10:38:00 crc kubenswrapper[5119]: I0121 10:38:00.154467 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 10:38:00 crc kubenswrapper[5119]: I0121 10:38:00.241751 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqgn8\" (UniqueName: \"kubernetes.io/projected/e4b01156-a29d-4fa8-ac51-e199710ceee8-kube-api-access-kqgn8\") pod \"auto-csr-approver-29483198-fg4gr\" (UID: \"e4b01156-a29d-4fa8-ac51-e199710ceee8\") " pod="openshift-infra/auto-csr-approver-29483198-fg4gr"
Jan 21 10:38:00 crc kubenswrapper[5119]: I0121 10:38:00.343070 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kqgn8\" (UniqueName: \"kubernetes.io/projected/e4b01156-a29d-4fa8-ac51-e199710ceee8-kube-api-access-kqgn8\") pod \"auto-csr-approver-29483198-fg4gr\" (UID: \"e4b01156-a29d-4fa8-ac51-e199710ceee8\") " pod="openshift-infra/auto-csr-approver-29483198-fg4gr"
Jan 21 10:38:00 crc kubenswrapper[5119]: I0121 10:38:00.366892 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqgn8\" (UniqueName: \"kubernetes.io/projected/e4b01156-a29d-4fa8-ac51-e199710ceee8-kube-api-access-kqgn8\") pod \"auto-csr-approver-29483198-fg4gr\" (UID: \"e4b01156-a29d-4fa8-ac51-e199710ceee8\") " pod="openshift-infra/auto-csr-approver-29483198-fg4gr"
Jan 21 10:38:00 crc kubenswrapper[5119]: I0121 10:38:00.463893 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483198-fg4gr"
Jan 21 10:38:00 crc kubenswrapper[5119]: I0121 10:38:00.598119 5119 scope.go:117] "RemoveContainer" containerID="73bddcb64083531b50226595722784d64d95d2bc702addc9793ddeca381cc5d7"
Jan 21 10:38:00 crc kubenswrapper[5119]: E0121 10:38:00.598672 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5"
Jan 21 10:38:00 crc kubenswrapper[5119]: I0121 10:38:00.696332 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483198-fg4gr"]
Jan 21 10:38:01 crc kubenswrapper[5119]: I0121 10:38:01.221811 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483198-fg4gr" event={"ID":"e4b01156-a29d-4fa8-ac51-e199710ceee8","Type":"ContainerStarted","Data":"ce95eba8f9f6a60585f5fb6d03e56e18f96a39d43684bd04bb907fbfec5454f5"}
Jan 21 10:38:02 crc kubenswrapper[5119]: I0121 10:38:02.229951 5119 generic.go:358] "Generic (PLEG): container finished" podID="e4b01156-a29d-4fa8-ac51-e199710ceee8" containerID="9c0277099616d067d9a2fec87ce06e022aab798b649da418c42cafd6ba326e4c" exitCode=0
Jan 21 10:38:02 crc kubenswrapper[5119]: I0121 10:38:02.230016 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483198-fg4gr" event={"ID":"e4b01156-a29d-4fa8-ac51-e199710ceee8","Type":"ContainerDied","Data":"9c0277099616d067d9a2fec87ce06e022aab798b649da418c42cafd6ba326e4c"}
Jan 21 10:38:03 crc kubenswrapper[5119]: I0121 10:38:03.447356 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483198-fg4gr"
Jan 21 10:38:03 crc kubenswrapper[5119]: I0121 10:38:03.483972 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqgn8\" (UniqueName: \"kubernetes.io/projected/e4b01156-a29d-4fa8-ac51-e199710ceee8-kube-api-access-kqgn8\") pod \"e4b01156-a29d-4fa8-ac51-e199710ceee8\" (UID: \"e4b01156-a29d-4fa8-ac51-e199710ceee8\") "
Jan 21 10:38:03 crc kubenswrapper[5119]: I0121 10:38:03.491738 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4b01156-a29d-4fa8-ac51-e199710ceee8-kube-api-access-kqgn8" (OuterVolumeSpecName: "kube-api-access-kqgn8") pod "e4b01156-a29d-4fa8-ac51-e199710ceee8" (UID: "e4b01156-a29d-4fa8-ac51-e199710ceee8"). InnerVolumeSpecName "kube-api-access-kqgn8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:38:03 crc kubenswrapper[5119]: I0121 10:38:03.588648 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kqgn8\" (UniqueName: \"kubernetes.io/projected/e4b01156-a29d-4fa8-ac51-e199710ceee8-kube-api-access-kqgn8\") on node \"crc\" DevicePath \"\""
Jan 21 10:38:04 crc kubenswrapper[5119]: I0121 10:38:04.246301 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483198-fg4gr" event={"ID":"e4b01156-a29d-4fa8-ac51-e199710ceee8","Type":"ContainerDied","Data":"ce95eba8f9f6a60585f5fb6d03e56e18f96a39d43684bd04bb907fbfec5454f5"}
Jan 21 10:38:04 crc kubenswrapper[5119]: I0121 10:38:04.246372 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce95eba8f9f6a60585f5fb6d03e56e18f96a39d43684bd04bb907fbfec5454f5"
Jan 21 10:38:04 crc kubenswrapper[5119]: I0121 10:38:04.246317 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483198-fg4gr"
Jan 21 10:38:04 crc kubenswrapper[5119]: I0121 10:38:04.508629 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483192-b482b"]
Jan 21 10:38:04 crc kubenswrapper[5119]: I0121 10:38:04.513727 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483192-b482b"]
Jan 21 10:38:04 crc kubenswrapper[5119]: I0121 10:38:04.600255 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b355af60-23e7-4914-ade4-1003b80567b7" path="/var/lib/kubelet/pods/b355af60-23e7-4914-ade4-1003b80567b7/volumes"
Jan 21 10:38:15 crc kubenswrapper[5119]: I0121 10:38:15.590894 5119 scope.go:117] "RemoveContainer" containerID="73bddcb64083531b50226595722784d64d95d2bc702addc9793ddeca381cc5d7"
Jan 21 10:38:15 crc kubenswrapper[5119]: E0121 10:38:15.591778 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5"
Jan 21 10:38:28 crc kubenswrapper[5119]: I0121 10:38:28.591654 5119 scope.go:117] "RemoveContainer" containerID="73bddcb64083531b50226595722784d64d95d2bc702addc9793ddeca381cc5d7"
Jan 21 10:38:28 crc kubenswrapper[5119]: E0121 10:38:28.592350 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5"
Jan 21 10:38:43 crc kubenswrapper[5119]: I0121 10:38:43.591039 5119 scope.go:117] "RemoveContainer" containerID="73bddcb64083531b50226595722784d64d95d2bc702addc9793ddeca381cc5d7"
Jan 21 10:38:43 crc kubenswrapper[5119]: E0121 10:38:43.592031 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5"
Jan 21 10:38:46 crc kubenswrapper[5119]: I0121 10:38:46.065542 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-shk6r"]
Jan 21 10:38:46 crc kubenswrapper[5119]: I0121 10:38:46.066485 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e4b01156-a29d-4fa8-ac51-e199710ceee8" containerName="oc"
Jan 21 10:38:46 crc kubenswrapper[5119]: I0121 10:38:46.066506 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4b01156-a29d-4fa8-ac51-e199710ceee8" containerName="oc"
Jan 21 10:38:46 crc kubenswrapper[5119]: I0121 10:38:46.066710 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="e4b01156-a29d-4fa8-ac51-e199710ceee8" containerName="oc"
Jan 21 10:38:46 crc kubenswrapper[5119]: I0121 10:38:46.078263 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-shk6r"]
Jan 21 10:38:46 crc kubenswrapper[5119]: I0121 10:38:46.078431 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-shk6r"
Jan 21 10:38:46 crc kubenswrapper[5119]: I0121 10:38:46.144905 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c21c2c47-acb5-41c1-9a77-ac8510ea51fe-utilities\") pod \"community-operators-shk6r\" (UID: \"c21c2c47-acb5-41c1-9a77-ac8510ea51fe\") " pod="openshift-marketplace/community-operators-shk6r"
Jan 21 10:38:46 crc kubenswrapper[5119]: I0121 10:38:46.145002 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c21c2c47-acb5-41c1-9a77-ac8510ea51fe-catalog-content\") pod \"community-operators-shk6r\" (UID: \"c21c2c47-acb5-41c1-9a77-ac8510ea51fe\") " pod="openshift-marketplace/community-operators-shk6r"
Jan 21 10:38:46 crc kubenswrapper[5119]: I0121 10:38:46.145299 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtwbk\" (UniqueName: \"kubernetes.io/projected/c21c2c47-acb5-41c1-9a77-ac8510ea51fe-kube-api-access-wtwbk\") pod \"community-operators-shk6r\" (UID: \"c21c2c47-acb5-41c1-9a77-ac8510ea51fe\") " pod="openshift-marketplace/community-operators-shk6r"
Jan 21 10:38:46 crc kubenswrapper[5119]: I0121 10:38:46.246547 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c21c2c47-acb5-41c1-9a77-ac8510ea51fe-utilities\") pod \"community-operators-shk6r\" (UID: \"c21c2c47-acb5-41c1-9a77-ac8510ea51fe\") " pod="openshift-marketplace/community-operators-shk6r"
Jan 21 10:38:46 crc kubenswrapper[5119]: I0121 10:38:46.246636 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c21c2c47-acb5-41c1-9a77-ac8510ea51fe-catalog-content\") pod \"community-operators-shk6r\" (UID: \"c21c2c47-acb5-41c1-9a77-ac8510ea51fe\") " pod="openshift-marketplace/community-operators-shk6r"
Jan 21 10:38:46 crc kubenswrapper[5119]: I0121 10:38:46.246702 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wtwbk\" (UniqueName: \"kubernetes.io/projected/c21c2c47-acb5-41c1-9a77-ac8510ea51fe-kube-api-access-wtwbk\") pod \"community-operators-shk6r\" (UID: \"c21c2c47-acb5-41c1-9a77-ac8510ea51fe\") " pod="openshift-marketplace/community-operators-shk6r"
Jan 21 10:38:46 crc kubenswrapper[5119]: I0121 10:38:46.247392 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c21c2c47-acb5-41c1-9a77-ac8510ea51fe-utilities\") pod \"community-operators-shk6r\" (UID: \"c21c2c47-acb5-41c1-9a77-ac8510ea51fe\") " pod="openshift-marketplace/community-operators-shk6r"
Jan 21 10:38:46 crc kubenswrapper[5119]: I0121 10:38:46.247634 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c21c2c47-acb5-41c1-9a77-ac8510ea51fe-catalog-content\") pod \"community-operators-shk6r\" (UID: \"c21c2c47-acb5-41c1-9a77-ac8510ea51fe\") " pod="openshift-marketplace/community-operators-shk6r"
Jan 21 10:38:46 crc kubenswrapper[5119]: I0121 10:38:46.277329 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtwbk\" (UniqueName: \"kubernetes.io/projected/c21c2c47-acb5-41c1-9a77-ac8510ea51fe-kube-api-access-wtwbk\") pod \"community-operators-shk6r\" (UID: \"c21c2c47-acb5-41c1-9a77-ac8510ea51fe\") " pod="openshift-marketplace/community-operators-shk6r"
Jan 21 10:38:46 crc kubenswrapper[5119]: I0121 10:38:46.393858 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-shk6r"
Jan 21 10:38:46 crc kubenswrapper[5119]: I0121 10:38:46.682402 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-shk6r"]
Jan 21 10:38:47 crc kubenswrapper[5119]: I0121 10:38:47.546012 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shk6r" event={"ID":"c21c2c47-acb5-41c1-9a77-ac8510ea51fe","Type":"ContainerStarted","Data":"65d9ed52c9d13f0f5dc4ef28a3d2b16f751221a834c0f57bc11fee717ad664a7"}
Jan 21 10:38:48 crc kubenswrapper[5119]: I0121 10:38:48.555977 5119 generic.go:358] "Generic (PLEG): container finished" podID="c21c2c47-acb5-41c1-9a77-ac8510ea51fe" containerID="9785b09d0546bc07cabb73851791664b3e278d377937672506eb5c04afca25c7" exitCode=0
Jan 21 10:38:48 crc kubenswrapper[5119]: I0121 10:38:48.556068 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shk6r" event={"ID":"c21c2c47-acb5-41c1-9a77-ac8510ea51fe","Type":"ContainerDied","Data":"9785b09d0546bc07cabb73851791664b3e278d377937672506eb5c04afca25c7"}
Jan 21 10:38:52 crc kubenswrapper[5119]: I0121 10:38:52.516335 5119 scope.go:117] "RemoveContainer" containerID="99ba77bc79444084af67279d38f49d58ffe79a37d38893eccc811ff94a0a8d66"
Jan 21 10:38:54 crc kubenswrapper[5119]: I0121 10:38:54.618699 5119 generic.go:358] "Generic (PLEG): container finished" podID="c21c2c47-acb5-41c1-9a77-ac8510ea51fe" containerID="0634f4a6f1b2246ae5e84ea30f22a64fc393eee3be1e87ced16328705e5a429c" exitCode=0
Jan 21 10:38:54 crc kubenswrapper[5119]: I0121 10:38:54.618887 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shk6r" event={"ID":"c21c2c47-acb5-41c1-9a77-ac8510ea51fe","Type":"ContainerDied","Data":"0634f4a6f1b2246ae5e84ea30f22a64fc393eee3be1e87ced16328705e5a429c"}
Jan 21 10:38:55 crc kubenswrapper[5119]: I0121 10:38:55.630666 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shk6r" event={"ID":"c21c2c47-acb5-41c1-9a77-ac8510ea51fe","Type":"ContainerStarted","Data":"9a35d262451eded14c1fa67d7607232471b64bb6c9bb5a3cb3a349baa6c616a8"}
Jan 21 10:38:55 crc kubenswrapper[5119]: I0121 10:38:55.650745 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-shk6r" podStartSLOduration=4.142597433 podStartE2EDuration="9.650716495s" podCreationTimestamp="2026-01-21 10:38:46 +0000 UTC" firstStartedPulling="2026-01-21 10:38:48.556920868 +0000 UTC m=+2644.225012536" lastFinishedPulling="2026-01-21 10:38:54.06503992 +0000 UTC m=+2649.733131598" observedRunningTime="2026-01-21 10:38:55.649285806 +0000 UTC m=+2651.317377494" watchObservedRunningTime="2026-01-21 10:38:55.650716495 +0000 UTC m=+2651.318808173"
Jan 21 10:38:56 crc kubenswrapper[5119]: I0121 10:38:56.394778 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-shk6r"
Jan 21 10:38:56 crc kubenswrapper[5119]: I0121 10:38:56.395040 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-shk6r"
Jan 21 10:38:57 crc kubenswrapper[5119]: I0121 10:38:57.435835 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-shk6r" podUID="c21c2c47-acb5-41c1-9a77-ac8510ea51fe" containerName="registry-server" probeResult="failure" output=<
Jan 21 10:38:57 crc kubenswrapper[5119]: timeout: failed to connect service ":50051" within 1s
Jan 21 10:38:57 crc kubenswrapper[5119]: >
Jan 21 10:38:57 crc kubenswrapper[5119]: I0121 10:38:57.591422 5119 scope.go:117] "RemoveContainer" containerID="73bddcb64083531b50226595722784d64d95d2bc702addc9793ddeca381cc5d7"
Jan 21 10:38:57 crc kubenswrapper[5119]: E0121 10:38:57.591769 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5"
Jan 21 10:39:05 crc kubenswrapper[5119]: I0121 10:39:05.034504 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s5l2z"]
Jan 21 10:39:05 crc kubenswrapper[5119]: I0121 10:39:05.775689 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s5l2z"]
Jan 21 10:39:05 crc kubenswrapper[5119]: I0121 10:39:05.775828 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s5l2z"
Jan 21 10:39:05 crc kubenswrapper[5119]: I0121 10:39:05.926502 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11dd014b-3d8c-4760-bac9-8499d83150ce-utilities\") pod \"redhat-operators-s5l2z\" (UID: \"11dd014b-3d8c-4760-bac9-8499d83150ce\") " pod="openshift-marketplace/redhat-operators-s5l2z"
Jan 21 10:39:05 crc kubenswrapper[5119]: I0121 10:39:05.926553 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11dd014b-3d8c-4760-bac9-8499d83150ce-catalog-content\") pod \"redhat-operators-s5l2z\" (UID: \"11dd014b-3d8c-4760-bac9-8499d83150ce\") " pod="openshift-marketplace/redhat-operators-s5l2z"
Jan 21 10:39:05 crc kubenswrapper[5119]: I0121 10:39:05.926683 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8chs7\" (UniqueName: \"kubernetes.io/projected/11dd014b-3d8c-4760-bac9-8499d83150ce-kube-api-access-8chs7\") pod \"redhat-operators-s5l2z\" (UID: \"11dd014b-3d8c-4760-bac9-8499d83150ce\") " pod="openshift-marketplace/redhat-operators-s5l2z"
Jan 21 10:39:06 crc kubenswrapper[5119]: I0121 10:39:06.027794 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11dd014b-3d8c-4760-bac9-8499d83150ce-utilities\") pod \"redhat-operators-s5l2z\" (UID: \"11dd014b-3d8c-4760-bac9-8499d83150ce\") " pod="openshift-marketplace/redhat-operators-s5l2z"
Jan 21 10:39:06 crc kubenswrapper[5119]: I0121 10:39:06.027849 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11dd014b-3d8c-4760-bac9-8499d83150ce-catalog-content\") pod \"redhat-operators-s5l2z\" (UID: \"11dd014b-3d8c-4760-bac9-8499d83150ce\") " pod="openshift-marketplace/redhat-operators-s5l2z"
Jan 21 10:39:06 crc kubenswrapper[5119]: I0121 10:39:06.027914 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8chs7\" (UniqueName: \"kubernetes.io/projected/11dd014b-3d8c-4760-bac9-8499d83150ce-kube-api-access-8chs7\") pod \"redhat-operators-s5l2z\" (UID: \"11dd014b-3d8c-4760-bac9-8499d83150ce\") " pod="openshift-marketplace/redhat-operators-s5l2z"
Jan 21 10:39:06 crc kubenswrapper[5119]: I0121 10:39:06.028678 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11dd014b-3d8c-4760-bac9-8499d83150ce-catalog-content\") pod \"redhat-operators-s5l2z\" (UID: \"11dd014b-3d8c-4760-bac9-8499d83150ce\") " pod="openshift-marketplace/redhat-operators-s5l2z"
Jan 21 10:39:06 crc kubenswrapper[5119]: I0121 10:39:06.028770 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11dd014b-3d8c-4760-bac9-8499d83150ce-utilities\") pod \"redhat-operators-s5l2z\" (UID: \"11dd014b-3d8c-4760-bac9-8499d83150ce\") " pod="openshift-marketplace/redhat-operators-s5l2z"
Jan 21 10:39:06 crc kubenswrapper[5119]: I0121 10:39:06.050663 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8chs7\" (UniqueName: \"kubernetes.io/projected/11dd014b-3d8c-4760-bac9-8499d83150ce-kube-api-access-8chs7\") pod \"redhat-operators-s5l2z\" (UID: \"11dd014b-3d8c-4760-bac9-8499d83150ce\") " pod="openshift-marketplace/redhat-operators-s5l2z"
Jan 21 10:39:06 crc kubenswrapper[5119]: I0121 10:39:06.099157 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s5l2z"
Jan 21 10:39:06 crc kubenswrapper[5119]: I0121 10:39:06.326853 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s5l2z"]
Jan 21 10:39:06 crc kubenswrapper[5119]: I0121 10:39:06.438751 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-shk6r"
Jan 21 10:39:06 crc kubenswrapper[5119]: I0121 10:39:06.477010 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-shk6r"
Jan 21 10:39:06 crc kubenswrapper[5119]: I0121 10:39:06.719688 5119 generic.go:358] "Generic (PLEG): container finished" podID="11dd014b-3d8c-4760-bac9-8499d83150ce" containerID="c2a3007cc8da8ee194c925278798ae4449d330ff93ebd1f6bb5a536eb1236b24" exitCode=0
Jan 21 10:39:06 crc kubenswrapper[5119]: I0121 10:39:06.719846 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5l2z" event={"ID":"11dd014b-3d8c-4760-bac9-8499d83150ce","Type":"ContainerDied","Data":"c2a3007cc8da8ee194c925278798ae4449d330ff93ebd1f6bb5a536eb1236b24"}
Jan 21 10:39:06 crc kubenswrapper[5119]: I0121 10:39:06.719912 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5l2z" event={"ID":"11dd014b-3d8c-4760-bac9-8499d83150ce","Type":"ContainerStarted","Data":"6e64939eaff9cac600d9a85a8e318563c8fe98659471d78ed44fa33c6f22ccc9"}
Jan 21 10:39:08 crc kubenswrapper[5119]: I0121 10:39:08.817498 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-shk6r"]
Jan 21 10:39:08 crc kubenswrapper[5119]: I0121 10:39:08.829523 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-shk6r" podUID="c21c2c47-acb5-41c1-9a77-ac8510ea51fe" containerName="registry-server" containerID="cri-o://9a35d262451eded14c1fa67d7607232471b64bb6c9bb5a3cb3a349baa6c616a8" gracePeriod=2
Jan 21 10:39:09 crc kubenswrapper[5119]: I0121 10:39:09.177139 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-shk6r"
Jan 21 10:39:09 crc kubenswrapper[5119]: I0121 10:39:09.273513 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtwbk\" (UniqueName: \"kubernetes.io/projected/c21c2c47-acb5-41c1-9a77-ac8510ea51fe-kube-api-access-wtwbk\") pod \"c21c2c47-acb5-41c1-9a77-ac8510ea51fe\" (UID: \"c21c2c47-acb5-41c1-9a77-ac8510ea51fe\") "
Jan 21 10:39:09 crc kubenswrapper[5119]: I0121 10:39:09.273586 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c21c2c47-acb5-41c1-9a77-ac8510ea51fe-utilities\") pod \"c21c2c47-acb5-41c1-9a77-ac8510ea51fe\" (UID: \"c21c2c47-acb5-41c1-9a77-ac8510ea51fe\") "
Jan 21 10:39:09 crc kubenswrapper[5119]: I0121 10:39:09.274877 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c21c2c47-acb5-41c1-9a77-ac8510ea51fe-utilities" (OuterVolumeSpecName: "utilities") pod "c21c2c47-acb5-41c1-9a77-ac8510ea51fe" (UID: "c21c2c47-acb5-41c1-9a77-ac8510ea51fe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:39:09 crc kubenswrapper[5119]: I0121 10:39:09.274955 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c21c2c47-acb5-41c1-9a77-ac8510ea51fe-catalog-content\") pod \"c21c2c47-acb5-41c1-9a77-ac8510ea51fe\" (UID: \"c21c2c47-acb5-41c1-9a77-ac8510ea51fe\") "
Jan 21 10:39:09 crc kubenswrapper[5119]: I0121 10:39:09.275306 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c21c2c47-acb5-41c1-9a77-ac8510ea51fe-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 10:39:09 crc kubenswrapper[5119]: I0121 10:39:09.280447 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c21c2c47-acb5-41c1-9a77-ac8510ea51fe-kube-api-access-wtwbk" (OuterVolumeSpecName: "kube-api-access-wtwbk") pod "c21c2c47-acb5-41c1-9a77-ac8510ea51fe" (UID: "c21c2c47-acb5-41c1-9a77-ac8510ea51fe"). InnerVolumeSpecName "kube-api-access-wtwbk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:39:09 crc kubenswrapper[5119]: I0121 10:39:09.324236 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c21c2c47-acb5-41c1-9a77-ac8510ea51fe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c21c2c47-acb5-41c1-9a77-ac8510ea51fe" (UID: "c21c2c47-acb5-41c1-9a77-ac8510ea51fe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:39:09 crc kubenswrapper[5119]: I0121 10:39:09.377001 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wtwbk\" (UniqueName: \"kubernetes.io/projected/c21c2c47-acb5-41c1-9a77-ac8510ea51fe-kube-api-access-wtwbk\") on node \"crc\" DevicePath \"\""
Jan 21 10:39:09 crc kubenswrapper[5119]: I0121 10:39:09.377035 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c21c2c47-acb5-41c1-9a77-ac8510ea51fe-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 10:39:09 crc kubenswrapper[5119]: I0121 10:39:09.743785 5119 generic.go:358] "Generic (PLEG): container finished" podID="c21c2c47-acb5-41c1-9a77-ac8510ea51fe" containerID="9a35d262451eded14c1fa67d7607232471b64bb6c9bb5a3cb3a349baa6c616a8" exitCode=0
Jan 21 10:39:09 crc kubenswrapper[5119]: I0121 10:39:09.743933 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shk6r" event={"ID":"c21c2c47-acb5-41c1-9a77-ac8510ea51fe","Type":"ContainerDied","Data":"9a35d262451eded14c1fa67d7607232471b64bb6c9bb5a3cb3a349baa6c616a8"}
Jan 21 10:39:09 crc kubenswrapper[5119]: I0121 10:39:09.744065 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-shk6r"
Jan 21 10:39:09 crc kubenswrapper[5119]: I0121 10:39:09.744105 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shk6r" event={"ID":"c21c2c47-acb5-41c1-9a77-ac8510ea51fe","Type":"ContainerDied","Data":"65d9ed52c9d13f0f5dc4ef28a3d2b16f751221a834c0f57bc11fee717ad664a7"}
Jan 21 10:39:09 crc kubenswrapper[5119]: I0121 10:39:09.744129 5119 scope.go:117] "RemoveContainer" containerID="9a35d262451eded14c1fa67d7607232471b64bb6c9bb5a3cb3a349baa6c616a8"
Jan 21 10:39:09 crc kubenswrapper[5119]: I0121 10:39:09.747950 5119 generic.go:358] "Generic (PLEG): container finished" podID="11dd014b-3d8c-4760-bac9-8499d83150ce" containerID="5552b45a7962c5b24741b62df86b8e49a4a1acffcce6b81b36cdb3a5cc1416e2" exitCode=0
Jan 21 10:39:09 crc kubenswrapper[5119]: I0121 10:39:09.748023 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5l2z" event={"ID":"11dd014b-3d8c-4760-bac9-8499d83150ce","Type":"ContainerDied","Data":"5552b45a7962c5b24741b62df86b8e49a4a1acffcce6b81b36cdb3a5cc1416e2"}
Jan 21 10:39:09 crc kubenswrapper[5119]: I0121 10:39:09.760881 5119 scope.go:117] "RemoveContainer" containerID="0634f4a6f1b2246ae5e84ea30f22a64fc393eee3be1e87ced16328705e5a429c"
Jan 21 10:39:09 crc kubenswrapper[5119]: I0121 10:39:09.789867 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-shk6r"]
Jan 21 10:39:09 crc kubenswrapper[5119]: I0121 10:39:09.797376 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-shk6r"]
Jan 21 10:39:09 crc kubenswrapper[5119]: I0121 10:39:09.815858 5119 scope.go:117] "RemoveContainer" containerID="9785b09d0546bc07cabb73851791664b3e278d377937672506eb5c04afca25c7"
Jan 21 10:39:09 crc kubenswrapper[5119]: I0121 10:39:09.840257 5119 scope.go:117] "RemoveContainer" containerID="9a35d262451eded14c1fa67d7607232471b64bb6c9bb5a3cb3a349baa6c616a8"
Jan 21 10:39:09 crc kubenswrapper[5119]: E0121 10:39:09.840694 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a35d262451eded14c1fa67d7607232471b64bb6c9bb5a3cb3a349baa6c616a8\": container with ID starting with 9a35d262451eded14c1fa67d7607232471b64bb6c9bb5a3cb3a349baa6c616a8 not found: ID does not exist" containerID="9a35d262451eded14c1fa67d7607232471b64bb6c9bb5a3cb3a349baa6c616a8"
Jan 21 10:39:09 crc kubenswrapper[5119]: I0121 10:39:09.840741 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a35d262451eded14c1fa67d7607232471b64bb6c9bb5a3cb3a349baa6c616a8"} err="failed to get container status \"9a35d262451eded14c1fa67d7607232471b64bb6c9bb5a3cb3a349baa6c616a8\": rpc error: code = NotFound desc = could not find container \"9a35d262451eded14c1fa67d7607232471b64bb6c9bb5a3cb3a349baa6c616a8\": container with ID starting with 9a35d262451eded14c1fa67d7607232471b64bb6c9bb5a3cb3a349baa6c616a8 not found: ID does not exist"
Jan 21 10:39:09 crc kubenswrapper[5119]: I0121 10:39:09.840814 5119 scope.go:117] "RemoveContainer" containerID="0634f4a6f1b2246ae5e84ea30f22a64fc393eee3be1e87ced16328705e5a429c"
Jan 21 10:39:09 crc kubenswrapper[5119]: E0121 10:39:09.841187 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0634f4a6f1b2246ae5e84ea30f22a64fc393eee3be1e87ced16328705e5a429c\": container with ID starting with 0634f4a6f1b2246ae5e84ea30f22a64fc393eee3be1e87ced16328705e5a429c not found: ID does not exist" containerID="0634f4a6f1b2246ae5e84ea30f22a64fc393eee3be1e87ced16328705e5a429c"
Jan 21 10:39:09 crc kubenswrapper[5119]: I0121 10:39:09.841228 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0634f4a6f1b2246ae5e84ea30f22a64fc393eee3be1e87ced16328705e5a429c"} err="failed to get container status \"0634f4a6f1b2246ae5e84ea30f22a64fc393eee3be1e87ced16328705e5a429c\": rpc error: code = NotFound desc = could not find container \"0634f4a6f1b2246ae5e84ea30f22a64fc393eee3be1e87ced16328705e5a429c\": container with ID starting with 0634f4a6f1b2246ae5e84ea30f22a64fc393eee3be1e87ced16328705e5a429c not found: ID does not exist"
Jan 21 10:39:09 crc kubenswrapper[5119]: I0121 10:39:09.841255 5119 scope.go:117] "RemoveContainer" containerID="9785b09d0546bc07cabb73851791664b3e278d377937672506eb5c04afca25c7"
Jan 21 10:39:09 crc kubenswrapper[5119]: E0121 10:39:09.842462 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9785b09d0546bc07cabb73851791664b3e278d377937672506eb5c04afca25c7\": container with ID starting with 9785b09d0546bc07cabb73851791664b3e278d377937672506eb5c04afca25c7 not found: ID does not exist" containerID="9785b09d0546bc07cabb73851791664b3e278d377937672506eb5c04afca25c7"
Jan 21 10:39:09 crc kubenswrapper[5119]: I0121 10:39:09.842490 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9785b09d0546bc07cabb73851791664b3e278d377937672506eb5c04afca25c7"} err="failed to get container status \"9785b09d0546bc07cabb73851791664b3e278d377937672506eb5c04afca25c7\": rpc error: code = NotFound desc = could not find container \"9785b09d0546bc07cabb73851791664b3e278d377937672506eb5c04afca25c7\": container with ID starting with 9785b09d0546bc07cabb73851791664b3e278d377937672506eb5c04afca25c7 not found: ID does not exist"
Jan 21 10:39:10 crc kubenswrapper[5119]: I0121 10:39:10.600420 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c21c2c47-acb5-41c1-9a77-ac8510ea51fe" path="/var/lib/kubelet/pods/c21c2c47-acb5-41c1-9a77-ac8510ea51fe/volumes"
Jan 21 10:39:10 crc kubenswrapper[5119]: I0121 10:39:10.757433 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5l2z" event={"ID":"11dd014b-3d8c-4760-bac9-8499d83150ce","Type":"ContainerStarted","Data":"5b96281786f97084cb05a503f85765924454f8cab18b186bc08df19c997e36b3"}
Jan 21 10:39:10 crc kubenswrapper[5119]: I0121 10:39:10.780347 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s5l2z" podStartSLOduration=3.685651328 podStartE2EDuration="5.78032168s" podCreationTimestamp="2026-01-21 10:39:05 +0000 UTC" firstStartedPulling="2026-01-21 10:39:06.720519597 +0000 UTC m=+2662.388611275" lastFinishedPulling="2026-01-21 10:39:08.815189949 +0000 UTC m=+2664.483281627" observedRunningTime="2026-01-21 10:39:10.777862533 +0000 UTC m=+2666.445954251" watchObservedRunningTime="2026-01-21 10:39:10.78032168 +0000 UTC m=+2666.448413358"
Jan 21 10:39:11 crc kubenswrapper[5119]: I0121 10:39:11.590640 5119 scope.go:117] "RemoveContainer" containerID="73bddcb64083531b50226595722784d64d95d2bc702addc9793ddeca381cc5d7"
Jan 21 10:39:11 crc kubenswrapper[5119]: E0121 10:39:11.591030 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5"
Jan 21 10:39:16 crc kubenswrapper[5119]: I0121 10:39:16.100152 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-s5l2z"
Jan 21 10:39:16 crc kubenswrapper[5119]: I0121 10:39:16.100682 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s5l2z"
Jan 21 10:39:16 crc kubenswrapper[5119]: I0121 10:39:16.140448 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s5l2z"
Jan 21 10:39:16 crc kubenswrapper[5119]: I0121 10:39:16.844432 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s5l2z"
Jan 21 10:39:17 crc kubenswrapper[5119]: I0121 10:39:17.260684 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s5l2z"]
Jan 21 10:39:18 crc kubenswrapper[5119]: I0121 10:39:18.813761 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s5l2z" podUID="11dd014b-3d8c-4760-bac9-8499d83150ce" containerName="registry-server" containerID="cri-o://5b96281786f97084cb05a503f85765924454f8cab18b186bc08df19c997e36b3" gracePeriod=2
Jan 21 10:39:21 crc kubenswrapper[5119]: I0121 10:39:21.838442 5119 generic.go:358] "Generic (PLEG): container finished" podID="11dd014b-3d8c-4760-bac9-8499d83150ce" containerID="5b96281786f97084cb05a503f85765924454f8cab18b186bc08df19c997e36b3" exitCode=0
Jan 21 10:39:21 crc kubenswrapper[5119]: I0121 10:39:21.838752 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5l2z" event={"ID":"11dd014b-3d8c-4760-bac9-8499d83150ce","Type":"ContainerDied","Data":"5b96281786f97084cb05a503f85765924454f8cab18b186bc08df19c997e36b3"}
Jan 21 10:39:21 crc kubenswrapper[5119]: I0121 10:39:21.839045 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5l2z" event={"ID":"11dd014b-3d8c-4760-bac9-8499d83150ce","Type":"ContainerDied","Data":"6e64939eaff9cac600d9a85a8e318563c8fe98659471d78ed44fa33c6f22ccc9"}
Jan 21 10:39:21 crc kubenswrapper[5119]: I0121 10:39:21.839067 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e64939eaff9cac600d9a85a8e318563c8fe98659471d78ed44fa33c6f22ccc9"
Jan 21
crc kubenswrapper[5119]: I0121 10:39:21.879583 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s5l2z" Jan 21 10:39:21 crc kubenswrapper[5119]: I0121 10:39:21.974423 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11dd014b-3d8c-4760-bac9-8499d83150ce-catalog-content\") pod \"11dd014b-3d8c-4760-bac9-8499d83150ce\" (UID: \"11dd014b-3d8c-4760-bac9-8499d83150ce\") " Jan 21 10:39:21 crc kubenswrapper[5119]: I0121 10:39:21.974593 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11dd014b-3d8c-4760-bac9-8499d83150ce-utilities\") pod \"11dd014b-3d8c-4760-bac9-8499d83150ce\" (UID: \"11dd014b-3d8c-4760-bac9-8499d83150ce\") " Jan 21 10:39:21 crc kubenswrapper[5119]: I0121 10:39:21.974678 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8chs7\" (UniqueName: \"kubernetes.io/projected/11dd014b-3d8c-4760-bac9-8499d83150ce-kube-api-access-8chs7\") pod \"11dd014b-3d8c-4760-bac9-8499d83150ce\" (UID: \"11dd014b-3d8c-4760-bac9-8499d83150ce\") " Jan 21 10:39:21 crc kubenswrapper[5119]: I0121 10:39:21.976020 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11dd014b-3d8c-4760-bac9-8499d83150ce-utilities" (OuterVolumeSpecName: "utilities") pod "11dd014b-3d8c-4760-bac9-8499d83150ce" (UID: "11dd014b-3d8c-4760-bac9-8499d83150ce"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:39:21 crc kubenswrapper[5119]: I0121 10:39:21.980592 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11dd014b-3d8c-4760-bac9-8499d83150ce-kube-api-access-8chs7" (OuterVolumeSpecName: "kube-api-access-8chs7") pod "11dd014b-3d8c-4760-bac9-8499d83150ce" (UID: "11dd014b-3d8c-4760-bac9-8499d83150ce"). InnerVolumeSpecName "kube-api-access-8chs7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:39:22 crc kubenswrapper[5119]: I0121 10:39:22.077188 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11dd014b-3d8c-4760-bac9-8499d83150ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "11dd014b-3d8c-4760-bac9-8499d83150ce" (UID: "11dd014b-3d8c-4760-bac9-8499d83150ce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:39:22 crc kubenswrapper[5119]: I0121 10:39:22.077815 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11dd014b-3d8c-4760-bac9-8499d83150ce-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:39:22 crc kubenswrapper[5119]: I0121 10:39:22.077835 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11dd014b-3d8c-4760-bac9-8499d83150ce-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:39:22 crc kubenswrapper[5119]: I0121 10:39:22.077844 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8chs7\" (UniqueName: \"kubernetes.io/projected/11dd014b-3d8c-4760-bac9-8499d83150ce-kube-api-access-8chs7\") on node \"crc\" DevicePath \"\"" Jan 21 10:39:22 crc kubenswrapper[5119]: I0121 10:39:22.845097 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s5l2z" Jan 21 10:39:22 crc kubenswrapper[5119]: I0121 10:39:22.862328 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s5l2z"] Jan 21 10:39:22 crc kubenswrapper[5119]: I0121 10:39:22.869443 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s5l2z"] Jan 21 10:39:24 crc kubenswrapper[5119]: I0121 10:39:24.599098 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11dd014b-3d8c-4760-bac9-8499d83150ce" path="/var/lib/kubelet/pods/11dd014b-3d8c-4760-bac9-8499d83150ce/volumes" Jan 21 10:39:26 crc kubenswrapper[5119]: I0121 10:39:26.591319 5119 scope.go:117] "RemoveContainer" containerID="73bddcb64083531b50226595722784d64d95d2bc702addc9793ddeca381cc5d7" Jan 21 10:39:26 crc kubenswrapper[5119]: E0121 10:39:26.591829 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:39:39 crc kubenswrapper[5119]: I0121 10:39:39.590871 5119 scope.go:117] "RemoveContainer" containerID="73bddcb64083531b50226595722784d64d95d2bc702addc9793ddeca381cc5d7" Jan 21 10:39:39 crc kubenswrapper[5119]: E0121 10:39:39.591722 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" 
podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:39:46 crc kubenswrapper[5119]: I0121 10:39:46.502328 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log" Jan 21 10:39:46 crc kubenswrapper[5119]: I0121 10:39:46.504586 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log" Jan 21 10:39:46 crc kubenswrapper[5119]: I0121 10:39:46.506345 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 21 10:39:46 crc kubenswrapper[5119]: I0121 10:39:46.508981 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 21 10:39:54 crc kubenswrapper[5119]: I0121 10:39:54.597694 5119 scope.go:117] "RemoveContainer" containerID="73bddcb64083531b50226595722784d64d95d2bc702addc9793ddeca381cc5d7" Jan 21 10:39:54 crc kubenswrapper[5119]: E0121 10:39:54.598680 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:40:00 crc kubenswrapper[5119]: I0121 10:40:00.131011 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483200-ljjdj"] Jan 21 10:40:00 crc kubenswrapper[5119]: I0121 10:40:00.132081 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="11dd014b-3d8c-4760-bac9-8499d83150ce" containerName="extract-content" Jan 21 10:40:00 crc kubenswrapper[5119]: I0121 10:40:00.132098 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="11dd014b-3d8c-4760-bac9-8499d83150ce" containerName="extract-content" Jan 21 10:40:00 crc kubenswrapper[5119]: I0121 10:40:00.132113 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="11dd014b-3d8c-4760-bac9-8499d83150ce" containerName="extract-utilities" Jan 21 10:40:00 crc kubenswrapper[5119]: I0121 10:40:00.132119 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="11dd014b-3d8c-4760-bac9-8499d83150ce" containerName="extract-utilities" Jan 21 10:40:00 crc kubenswrapper[5119]: I0121 10:40:00.132140 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c21c2c47-acb5-41c1-9a77-ac8510ea51fe" containerName="extract-utilities" Jan 21 10:40:00 crc kubenswrapper[5119]: I0121 10:40:00.132145 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21c2c47-acb5-41c1-9a77-ac8510ea51fe" containerName="extract-utilities" Jan 21 10:40:00 crc kubenswrapper[5119]: I0121 10:40:00.132156 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c21c2c47-acb5-41c1-9a77-ac8510ea51fe" containerName="extract-content" Jan 21 10:40:00 crc kubenswrapper[5119]: I0121 10:40:00.132161 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21c2c47-acb5-41c1-9a77-ac8510ea51fe" containerName="extract-content" Jan 21 10:40:00 crc kubenswrapper[5119]: I0121 10:40:00.132169 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c21c2c47-acb5-41c1-9a77-ac8510ea51fe" containerName="registry-server" Jan 21 10:40:00 crc kubenswrapper[5119]: I0121 10:40:00.132174 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21c2c47-acb5-41c1-9a77-ac8510ea51fe" containerName="registry-server" Jan 21 10:40:00 crc kubenswrapper[5119]: I0121 10:40:00.132186 5119 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="11dd014b-3d8c-4760-bac9-8499d83150ce" containerName="registry-server" Jan 21 10:40:00 crc kubenswrapper[5119]: I0121 10:40:00.132192 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="11dd014b-3d8c-4760-bac9-8499d83150ce" containerName="registry-server" Jan 21 10:40:00 crc kubenswrapper[5119]: I0121 10:40:00.132303 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="11dd014b-3d8c-4760-bac9-8499d83150ce" containerName="registry-server" Jan 21 10:40:00 crc kubenswrapper[5119]: I0121 10:40:00.132315 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="c21c2c47-acb5-41c1-9a77-ac8510ea51fe" containerName="registry-server" Jan 21 10:40:00 crc kubenswrapper[5119]: I0121 10:40:00.166164 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483200-ljjdj" Jan 21 10:40:00 crc kubenswrapper[5119]: I0121 10:40:00.166058 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483200-ljjdj"] Jan 21 10:40:00 crc kubenswrapper[5119]: I0121 10:40:00.168819 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:40:00 crc kubenswrapper[5119]: I0121 10:40:00.168874 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:40:00 crc kubenswrapper[5119]: I0121 10:40:00.169321 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\"" Jan 21 10:40:00 crc kubenswrapper[5119]: I0121 10:40:00.239027 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frm4r\" (UniqueName: \"kubernetes.io/projected/2b5a4720-ab00-4fe0-8d45-4dee8cd7f10a-kube-api-access-frm4r\") pod 
\"auto-csr-approver-29483200-ljjdj\" (UID: \"2b5a4720-ab00-4fe0-8d45-4dee8cd7f10a\") " pod="openshift-infra/auto-csr-approver-29483200-ljjdj" Jan 21 10:40:00 crc kubenswrapper[5119]: I0121 10:40:00.340170 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-frm4r\" (UniqueName: \"kubernetes.io/projected/2b5a4720-ab00-4fe0-8d45-4dee8cd7f10a-kube-api-access-frm4r\") pod \"auto-csr-approver-29483200-ljjdj\" (UID: \"2b5a4720-ab00-4fe0-8d45-4dee8cd7f10a\") " pod="openshift-infra/auto-csr-approver-29483200-ljjdj" Jan 21 10:40:00 crc kubenswrapper[5119]: I0121 10:40:00.381248 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-frm4r\" (UniqueName: \"kubernetes.io/projected/2b5a4720-ab00-4fe0-8d45-4dee8cd7f10a-kube-api-access-frm4r\") pod \"auto-csr-approver-29483200-ljjdj\" (UID: \"2b5a4720-ab00-4fe0-8d45-4dee8cd7f10a\") " pod="openshift-infra/auto-csr-approver-29483200-ljjdj" Jan 21 10:40:00 crc kubenswrapper[5119]: I0121 10:40:00.486460 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483200-ljjdj" Jan 21 10:40:00 crc kubenswrapper[5119]: I0121 10:40:00.671590 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483200-ljjdj"] Jan 21 10:40:01 crc kubenswrapper[5119]: I0121 10:40:01.146459 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483200-ljjdj" event={"ID":"2b5a4720-ab00-4fe0-8d45-4dee8cd7f10a","Type":"ContainerStarted","Data":"2a199446c44d7bf172ce8646baf248d1204d5551385f2ca53daafac4b506e681"} Jan 21 10:40:02 crc kubenswrapper[5119]: I0121 10:40:02.155976 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483200-ljjdj" event={"ID":"2b5a4720-ab00-4fe0-8d45-4dee8cd7f10a","Type":"ContainerStarted","Data":"dfc7a70a08ead445df8b85a3dc137559970f5bfe79191da04256d017adf8e4f7"} Jan 21 10:40:02 crc kubenswrapper[5119]: I0121 10:40:02.174828 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29483200-ljjdj" podStartSLOduration=1.068989869 podStartE2EDuration="2.174805774s" podCreationTimestamp="2026-01-21 10:40:00 +0000 UTC" firstStartedPulling="2026-01-21 10:40:00.693280806 +0000 UTC m=+2716.361372484" lastFinishedPulling="2026-01-21 10:40:01.799096691 +0000 UTC m=+2717.467188389" observedRunningTime="2026-01-21 10:40:02.167089412 +0000 UTC m=+2717.835181090" watchObservedRunningTime="2026-01-21 10:40:02.174805774 +0000 UTC m=+2717.842897452" Jan 21 10:40:03 crc kubenswrapper[5119]: I0121 10:40:03.167061 5119 generic.go:358] "Generic (PLEG): container finished" podID="2b5a4720-ab00-4fe0-8d45-4dee8cd7f10a" containerID="dfc7a70a08ead445df8b85a3dc137559970f5bfe79191da04256d017adf8e4f7" exitCode=0 Jan 21 10:40:03 crc kubenswrapper[5119]: I0121 10:40:03.167300 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483200-ljjdj" 
event={"ID":"2b5a4720-ab00-4fe0-8d45-4dee8cd7f10a","Type":"ContainerDied","Data":"dfc7a70a08ead445df8b85a3dc137559970f5bfe79191da04256d017adf8e4f7"} Jan 21 10:40:04 crc kubenswrapper[5119]: I0121 10:40:04.481763 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483200-ljjdj" Jan 21 10:40:04 crc kubenswrapper[5119]: I0121 10:40:04.631779 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frm4r\" (UniqueName: \"kubernetes.io/projected/2b5a4720-ab00-4fe0-8d45-4dee8cd7f10a-kube-api-access-frm4r\") pod \"2b5a4720-ab00-4fe0-8d45-4dee8cd7f10a\" (UID: \"2b5a4720-ab00-4fe0-8d45-4dee8cd7f10a\") " Jan 21 10:40:04 crc kubenswrapper[5119]: I0121 10:40:04.638285 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b5a4720-ab00-4fe0-8d45-4dee8cd7f10a-kube-api-access-frm4r" (OuterVolumeSpecName: "kube-api-access-frm4r") pod "2b5a4720-ab00-4fe0-8d45-4dee8cd7f10a" (UID: "2b5a4720-ab00-4fe0-8d45-4dee8cd7f10a"). InnerVolumeSpecName "kube-api-access-frm4r". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:40:04 crc kubenswrapper[5119]: I0121 10:40:04.734198 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-frm4r\" (UniqueName: \"kubernetes.io/projected/2b5a4720-ab00-4fe0-8d45-4dee8cd7f10a-kube-api-access-frm4r\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:05 crc kubenswrapper[5119]: I0121 10:40:05.209939 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483200-ljjdj" Jan 21 10:40:05 crc kubenswrapper[5119]: I0121 10:40:05.210210 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483200-ljjdj" event={"ID":"2b5a4720-ab00-4fe0-8d45-4dee8cd7f10a","Type":"ContainerDied","Data":"2a199446c44d7bf172ce8646baf248d1204d5551385f2ca53daafac4b506e681"} Jan 21 10:40:05 crc kubenswrapper[5119]: I0121 10:40:05.210368 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a199446c44d7bf172ce8646baf248d1204d5551385f2ca53daafac4b506e681" Jan 21 10:40:05 crc kubenswrapper[5119]: I0121 10:40:05.231969 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483194-t4k25"] Jan 21 10:40:05 crc kubenswrapper[5119]: I0121 10:40:05.237501 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483194-t4k25"] Jan 21 10:40:05 crc kubenswrapper[5119]: I0121 10:40:05.591084 5119 scope.go:117] "RemoveContainer" containerID="73bddcb64083531b50226595722784d64d95d2bc702addc9793ddeca381cc5d7" Jan 21 10:40:05 crc kubenswrapper[5119]: E0121 10:40:05.592309 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:40:06 crc kubenswrapper[5119]: I0121 10:40:06.597876 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f687f17a-51f5-4455-86e1-2f55199ed279" path="/var/lib/kubelet/pods/f687f17a-51f5-4455-86e1-2f55199ed279/volumes" Jan 21 10:40:19 crc kubenswrapper[5119]: I0121 10:40:19.591047 5119 scope.go:117] "RemoveContainer" 
containerID="73bddcb64083531b50226595722784d64d95d2bc702addc9793ddeca381cc5d7" Jan 21 10:40:19 crc kubenswrapper[5119]: E0121 10:40:19.591842 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:40:30 crc kubenswrapper[5119]: I0121 10:40:30.590721 5119 scope.go:117] "RemoveContainer" containerID="73bddcb64083531b50226595722784d64d95d2bc702addc9793ddeca381cc5d7" Jan 21 10:40:30 crc kubenswrapper[5119]: E0121 10:40:30.591413 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:40:41 crc kubenswrapper[5119]: I0121 10:40:41.592355 5119 scope.go:117] "RemoveContainer" containerID="73bddcb64083531b50226595722784d64d95d2bc702addc9793ddeca381cc5d7" Jan 21 10:40:41 crc kubenswrapper[5119]: E0121 10:40:41.594167 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:40:52 crc kubenswrapper[5119]: I0121 10:40:52.692554 5119 scope.go:117] 
"RemoveContainer" containerID="ca7e82efc144e2fa922ae479cb25773646ec130dbf7f3ba01303684c495617fd" Jan 21 10:40:56 crc kubenswrapper[5119]: I0121 10:40:56.594946 5119 scope.go:117] "RemoveContainer" containerID="73bddcb64083531b50226595722784d64d95d2bc702addc9793ddeca381cc5d7" Jan 21 10:40:56 crc kubenswrapper[5119]: E0121 10:40:56.595670 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:40:57 crc kubenswrapper[5119]: I0121 10:40:57.624272 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-pfp5r"] Jan 21 10:40:57 crc kubenswrapper[5119]: I0121 10:40:57.625422 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2b5a4720-ab00-4fe0-8d45-4dee8cd7f10a" containerName="oc" Jan 21 10:40:57 crc kubenswrapper[5119]: I0121 10:40:57.625445 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b5a4720-ab00-4fe0-8d45-4dee8cd7f10a" containerName="oc" Jan 21 10:40:57 crc kubenswrapper[5119]: I0121 10:40:57.625681 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="2b5a4720-ab00-4fe0-8d45-4dee8cd7f10a" containerName="oc" Jan 21 10:40:57 crc kubenswrapper[5119]: I0121 10:40:57.634319 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-pfp5r" Jan 21 10:40:57 crc kubenswrapper[5119]: I0121 10:40:57.635514 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-pfp5r"] Jan 21 10:40:57 crc kubenswrapper[5119]: I0121 10:40:57.683938 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk9hg\" (UniqueName: \"kubernetes.io/projected/cf3ed12e-f478-4fae-9a05-8b92d7efb693-kube-api-access-dk9hg\") pod \"infrawatch-operators-pfp5r\" (UID: \"cf3ed12e-f478-4fae-9a05-8b92d7efb693\") " pod="service-telemetry/infrawatch-operators-pfp5r" Jan 21 10:40:57 crc kubenswrapper[5119]: I0121 10:40:57.785198 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dk9hg\" (UniqueName: \"kubernetes.io/projected/cf3ed12e-f478-4fae-9a05-8b92d7efb693-kube-api-access-dk9hg\") pod \"infrawatch-operators-pfp5r\" (UID: \"cf3ed12e-f478-4fae-9a05-8b92d7efb693\") " pod="service-telemetry/infrawatch-operators-pfp5r" Jan 21 10:40:57 crc kubenswrapper[5119]: I0121 10:40:57.809928 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk9hg\" (UniqueName: \"kubernetes.io/projected/cf3ed12e-f478-4fae-9a05-8b92d7efb693-kube-api-access-dk9hg\") pod \"infrawatch-operators-pfp5r\" (UID: \"cf3ed12e-f478-4fae-9a05-8b92d7efb693\") " pod="service-telemetry/infrawatch-operators-pfp5r" Jan 21 10:40:57 crc kubenswrapper[5119]: I0121 10:40:57.957473 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-pfp5r" Jan 21 10:40:58 crc kubenswrapper[5119]: I0121 10:40:58.140041 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-pfp5r"] Jan 21 10:40:58 crc kubenswrapper[5119]: I0121 10:40:58.142572 5119 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 10:40:59 crc kubenswrapper[5119]: I0121 10:40:59.043318 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-pfp5r" event={"ID":"cf3ed12e-f478-4fae-9a05-8b92d7efb693","Type":"ContainerStarted","Data":"e6c62fdbd4220dbccb055cfd03e3c4cb7383240b0784867754b3aa89044284f5"} Jan 21 10:40:59 crc kubenswrapper[5119]: I0121 10:40:59.043690 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-pfp5r" event={"ID":"cf3ed12e-f478-4fae-9a05-8b92d7efb693","Type":"ContainerStarted","Data":"da7b98caaf1916ead97d62b859e686ca82f72e1a6fb67cb7f432e95d31ac8014"} Jan 21 10:40:59 crc kubenswrapper[5119]: I0121 10:40:59.061393 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-pfp5r" podStartSLOduration=1.573234534 podStartE2EDuration="2.061375312s" podCreationTimestamp="2026-01-21 10:40:57 +0000 UTC" firstStartedPulling="2026-01-21 10:40:58.142807326 +0000 UTC m=+2773.810899004" lastFinishedPulling="2026-01-21 10:40:58.630948104 +0000 UTC m=+2774.299039782" observedRunningTime="2026-01-21 10:40:59.058711699 +0000 UTC m=+2774.726803377" watchObservedRunningTime="2026-01-21 10:40:59.061375312 +0000 UTC m=+2774.729466990" Jan 21 10:41:07 crc kubenswrapper[5119]: I0121 10:41:07.957802 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-pfp5r" Jan 21 10:41:07 crc kubenswrapper[5119]: I0121 10:41:07.958504 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="service-telemetry/infrawatch-operators-pfp5r" Jan 21 10:41:07 crc kubenswrapper[5119]: I0121 10:41:07.991354 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-pfp5r" Jan 21 10:41:08 crc kubenswrapper[5119]: I0121 10:41:08.148572 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-pfp5r" Jan 21 10:41:08 crc kubenswrapper[5119]: I0121 10:41:08.218114 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-pfp5r"] Jan 21 10:41:10 crc kubenswrapper[5119]: I0121 10:41:10.135868 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-pfp5r" podUID="cf3ed12e-f478-4fae-9a05-8b92d7efb693" containerName="registry-server" containerID="cri-o://e6c62fdbd4220dbccb055cfd03e3c4cb7383240b0784867754b3aa89044284f5" gracePeriod=2 Jan 21 10:41:10 crc kubenswrapper[5119]: I0121 10:41:10.984150 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-pfp5r" Jan 21 10:41:11 crc kubenswrapper[5119]: I0121 10:41:11.075618 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk9hg\" (UniqueName: \"kubernetes.io/projected/cf3ed12e-f478-4fae-9a05-8b92d7efb693-kube-api-access-dk9hg\") pod \"cf3ed12e-f478-4fae-9a05-8b92d7efb693\" (UID: \"cf3ed12e-f478-4fae-9a05-8b92d7efb693\") " Jan 21 10:41:11 crc kubenswrapper[5119]: I0121 10:41:11.081557 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf3ed12e-f478-4fae-9a05-8b92d7efb693-kube-api-access-dk9hg" (OuterVolumeSpecName: "kube-api-access-dk9hg") pod "cf3ed12e-f478-4fae-9a05-8b92d7efb693" (UID: "cf3ed12e-f478-4fae-9a05-8b92d7efb693"). InnerVolumeSpecName "kube-api-access-dk9hg". 
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:41:11 crc kubenswrapper[5119]: I0121 10:41:11.147359 5119 generic.go:358] "Generic (PLEG): container finished" podID="cf3ed12e-f478-4fae-9a05-8b92d7efb693" containerID="e6c62fdbd4220dbccb055cfd03e3c4cb7383240b0784867754b3aa89044284f5" exitCode=0
Jan 21 10:41:11 crc kubenswrapper[5119]: I0121 10:41:11.147443 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-pfp5r" event={"ID":"cf3ed12e-f478-4fae-9a05-8b92d7efb693","Type":"ContainerDied","Data":"e6c62fdbd4220dbccb055cfd03e3c4cb7383240b0784867754b3aa89044284f5"}
Jan 21 10:41:11 crc kubenswrapper[5119]: I0121 10:41:11.147468 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-pfp5r" event={"ID":"cf3ed12e-f478-4fae-9a05-8b92d7efb693","Type":"ContainerDied","Data":"da7b98caaf1916ead97d62b859e686ca82f72e1a6fb67cb7f432e95d31ac8014"}
Jan 21 10:41:11 crc kubenswrapper[5119]: I0121 10:41:11.147473 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-pfp5r"
Jan 21 10:41:11 crc kubenswrapper[5119]: I0121 10:41:11.147484 5119 scope.go:117] "RemoveContainer" containerID="e6c62fdbd4220dbccb055cfd03e3c4cb7383240b0784867754b3aa89044284f5"
Jan 21 10:41:11 crc kubenswrapper[5119]: I0121 10:41:11.164557 5119 scope.go:117] "RemoveContainer" containerID="e6c62fdbd4220dbccb055cfd03e3c4cb7383240b0784867754b3aa89044284f5"
Jan 21 10:41:11 crc kubenswrapper[5119]: E0121 10:41:11.164849 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6c62fdbd4220dbccb055cfd03e3c4cb7383240b0784867754b3aa89044284f5\": container with ID starting with e6c62fdbd4220dbccb055cfd03e3c4cb7383240b0784867754b3aa89044284f5 not found: ID does not exist" containerID="e6c62fdbd4220dbccb055cfd03e3c4cb7383240b0784867754b3aa89044284f5"
Jan 21 10:41:11 crc kubenswrapper[5119]: I0121 10:41:11.164943 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6c62fdbd4220dbccb055cfd03e3c4cb7383240b0784867754b3aa89044284f5"} err="failed to get container status \"e6c62fdbd4220dbccb055cfd03e3c4cb7383240b0784867754b3aa89044284f5\": rpc error: code = NotFound desc = could not find container \"e6c62fdbd4220dbccb055cfd03e3c4cb7383240b0784867754b3aa89044284f5\": container with ID starting with e6c62fdbd4220dbccb055cfd03e3c4cb7383240b0784867754b3aa89044284f5 not found: ID does not exist"
Jan 21 10:41:11 crc kubenswrapper[5119]: I0121 10:41:11.178853 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dk9hg\" (UniqueName: \"kubernetes.io/projected/cf3ed12e-f478-4fae-9a05-8b92d7efb693-kube-api-access-dk9hg\") on node \"crc\" DevicePath \"\""
Jan 21 10:41:11 crc kubenswrapper[5119]: I0121 10:41:11.181387 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-pfp5r"]
Jan 21 10:41:11 crc kubenswrapper[5119]: I0121 10:41:11.187035 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-pfp5r"]
Jan 21 10:41:11 crc kubenswrapper[5119]: I0121 10:41:11.590595 5119 scope.go:117] "RemoveContainer" containerID="73bddcb64083531b50226595722784d64d95d2bc702addc9793ddeca381cc5d7"
Jan 21 10:41:11 crc kubenswrapper[5119]: E0121 10:41:11.590896 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5"
Jan 21 10:41:12 crc kubenswrapper[5119]: I0121 10:41:12.600300 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf3ed12e-f478-4fae-9a05-8b92d7efb693" path="/var/lib/kubelet/pods/cf3ed12e-f478-4fae-9a05-8b92d7efb693/volumes"
Jan 21 10:41:26 crc kubenswrapper[5119]: I0121 10:41:26.593871 5119 scope.go:117] "RemoveContainer" containerID="73bddcb64083531b50226595722784d64d95d2bc702addc9793ddeca381cc5d7"
Jan 21 10:41:27 crc kubenswrapper[5119]: I0121 10:41:27.261095 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerStarted","Data":"8c6b266358486376a0f62632d71df43c8b4cdbd094647fbf4ce48a482ef4cbb9"}
Jan 21 10:42:00 crc kubenswrapper[5119]: I0121 10:42:00.131812 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483202-fpc66"]
Jan 21 10:42:00 crc kubenswrapper[5119]: I0121 10:42:00.133085 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cf3ed12e-f478-4fae-9a05-8b92d7efb693" containerName="registry-server"
Jan 21 10:42:00 crc kubenswrapper[5119]: I0121 10:42:00.133103 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf3ed12e-f478-4fae-9a05-8b92d7efb693" containerName="registry-server"
Jan 21 10:42:00 crc kubenswrapper[5119]: I0121 10:42:00.133255 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="cf3ed12e-f478-4fae-9a05-8b92d7efb693" containerName="registry-server"
Jan 21 10:42:00 crc kubenswrapper[5119]: I0121 10:42:00.187970 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483202-fpc66"]
Jan 21 10:42:00 crc kubenswrapper[5119]: I0121 10:42:00.188117 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483202-fpc66"
Jan 21 10:42:00 crc kubenswrapper[5119]: I0121 10:42:00.190501 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\""
Jan 21 10:42:00 crc kubenswrapper[5119]: I0121 10:42:00.190744 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 10:42:00 crc kubenswrapper[5119]: I0121 10:42:00.191251 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 10:42:00 crc kubenswrapper[5119]: I0121 10:42:00.302557 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4chcc\" (UniqueName: \"kubernetes.io/projected/559c4288-2bd8-4527-80c7-5928414f4caf-kube-api-access-4chcc\") pod \"auto-csr-approver-29483202-fpc66\" (UID: \"559c4288-2bd8-4527-80c7-5928414f4caf\") " pod="openshift-infra/auto-csr-approver-29483202-fpc66"
Jan 21 10:42:00 crc kubenswrapper[5119]: I0121 10:42:00.404426 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4chcc\" (UniqueName: \"kubernetes.io/projected/559c4288-2bd8-4527-80c7-5928414f4caf-kube-api-access-4chcc\") pod \"auto-csr-approver-29483202-fpc66\" (UID: \"559c4288-2bd8-4527-80c7-5928414f4caf\") " pod="openshift-infra/auto-csr-approver-29483202-fpc66"
Jan 21 10:42:00 crc kubenswrapper[5119]: I0121 10:42:00.424364 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4chcc\" (UniqueName: \"kubernetes.io/projected/559c4288-2bd8-4527-80c7-5928414f4caf-kube-api-access-4chcc\") pod \"auto-csr-approver-29483202-fpc66\" (UID: \"559c4288-2bd8-4527-80c7-5928414f4caf\") " pod="openshift-infra/auto-csr-approver-29483202-fpc66"
Jan 21 10:42:00 crc kubenswrapper[5119]: I0121 10:42:00.505521 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483202-fpc66"
Jan 21 10:42:00 crc kubenswrapper[5119]: I0121 10:42:00.898657 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483202-fpc66"]
Jan 21 10:42:01 crc kubenswrapper[5119]: I0121 10:42:01.512914 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483202-fpc66" event={"ID":"559c4288-2bd8-4527-80c7-5928414f4caf","Type":"ContainerStarted","Data":"5072b4c4125c6c76a9aac9483ae6d2a7ad2b84de48776e836eb8e5648f59fa6d"}
Jan 21 10:42:03 crc kubenswrapper[5119]: I0121 10:42:03.533133 5119 generic.go:358] "Generic (PLEG): container finished" podID="559c4288-2bd8-4527-80c7-5928414f4caf" containerID="c6445e50aefb830e8dd643e4358623fa4bb7a21773f47ae7c368475223d48453" exitCode=0
Jan 21 10:42:03 crc kubenswrapper[5119]: I0121 10:42:03.533257 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483202-fpc66" event={"ID":"559c4288-2bd8-4527-80c7-5928414f4caf","Type":"ContainerDied","Data":"c6445e50aefb830e8dd643e4358623fa4bb7a21773f47ae7c368475223d48453"}
Jan 21 10:42:04 crc kubenswrapper[5119]: I0121 10:42:04.753619 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483202-fpc66"
Jan 21 10:42:04 crc kubenswrapper[5119]: I0121 10:42:04.773121 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4chcc\" (UniqueName: \"kubernetes.io/projected/559c4288-2bd8-4527-80c7-5928414f4caf-kube-api-access-4chcc\") pod \"559c4288-2bd8-4527-80c7-5928414f4caf\" (UID: \"559c4288-2bd8-4527-80c7-5928414f4caf\") "
Jan 21 10:42:04 crc kubenswrapper[5119]: I0121 10:42:04.779523 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/559c4288-2bd8-4527-80c7-5928414f4caf-kube-api-access-4chcc" (OuterVolumeSpecName: "kube-api-access-4chcc") pod "559c4288-2bd8-4527-80c7-5928414f4caf" (UID: "559c4288-2bd8-4527-80c7-5928414f4caf"). InnerVolumeSpecName "kube-api-access-4chcc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:42:04 crc kubenswrapper[5119]: I0121 10:42:04.874840 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4chcc\" (UniqueName: \"kubernetes.io/projected/559c4288-2bd8-4527-80c7-5928414f4caf-kube-api-access-4chcc\") on node \"crc\" DevicePath \"\""
Jan 21 10:42:05 crc kubenswrapper[5119]: I0121 10:42:05.548943 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483202-fpc66" event={"ID":"559c4288-2bd8-4527-80c7-5928414f4caf","Type":"ContainerDied","Data":"5072b4c4125c6c76a9aac9483ae6d2a7ad2b84de48776e836eb8e5648f59fa6d"}
Jan 21 10:42:05 crc kubenswrapper[5119]: I0121 10:42:05.548981 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5072b4c4125c6c76a9aac9483ae6d2a7ad2b84de48776e836eb8e5648f59fa6d"
Jan 21 10:42:05 crc kubenswrapper[5119]: I0121 10:42:05.549073 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483202-fpc66"
Jan 21 10:42:05 crc kubenswrapper[5119]: I0121 10:42:05.821435 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483196-8bpq9"]
Jan 21 10:42:05 crc kubenswrapper[5119]: I0121 10:42:05.829667 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483196-8bpq9"]
Jan 21 10:42:06 crc kubenswrapper[5119]: I0121 10:42:06.598695 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae15fc52-c0a6-44d6-9d27-494edd4f4f51" path="/var/lib/kubelet/pods/ae15fc52-c0a6-44d6-9d27-494edd4f4f51/volumes"
Jan 21 10:42:52 crc kubenswrapper[5119]: I0121 10:42:52.833040 5119 scope.go:117] "RemoveContainer" containerID="cc910c6f352c1246686d27afed13fa91efb0f1244c60e27f121755f9d4a90b90"
Jan 21 10:43:03 crc kubenswrapper[5119]: I0121 10:43:03.961140 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8wn2s"]
Jan 21 10:43:03 crc kubenswrapper[5119]: I0121 10:43:03.962501 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="559c4288-2bd8-4527-80c7-5928414f4caf" containerName="oc"
Jan 21 10:43:03 crc kubenswrapper[5119]: I0121 10:43:03.962530 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="559c4288-2bd8-4527-80c7-5928414f4caf" containerName="oc"
Jan 21 10:43:03 crc kubenswrapper[5119]: I0121 10:43:03.962684 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="559c4288-2bd8-4527-80c7-5928414f4caf" containerName="oc"
Jan 21 10:43:04 crc kubenswrapper[5119]: I0121 10:43:04.110039 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8wn2s"]
Jan 21 10:43:04 crc kubenswrapper[5119]: I0121 10:43:04.110207 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8wn2s"
Jan 21 10:43:04 crc kubenswrapper[5119]: I0121 10:43:04.195394 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8j2d\" (UniqueName: \"kubernetes.io/projected/fa0f4cff-a63b-438f-86d1-80d9b562479f-kube-api-access-f8j2d\") pod \"certified-operators-8wn2s\" (UID: \"fa0f4cff-a63b-438f-86d1-80d9b562479f\") " pod="openshift-marketplace/certified-operators-8wn2s"
Jan 21 10:43:04 crc kubenswrapper[5119]: I0121 10:43:04.195463 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa0f4cff-a63b-438f-86d1-80d9b562479f-utilities\") pod \"certified-operators-8wn2s\" (UID: \"fa0f4cff-a63b-438f-86d1-80d9b562479f\") " pod="openshift-marketplace/certified-operators-8wn2s"
Jan 21 10:43:04 crc kubenswrapper[5119]: I0121 10:43:04.195503 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa0f4cff-a63b-438f-86d1-80d9b562479f-catalog-content\") pod \"certified-operators-8wn2s\" (UID: \"fa0f4cff-a63b-438f-86d1-80d9b562479f\") " pod="openshift-marketplace/certified-operators-8wn2s"
Jan 21 10:43:04 crc kubenswrapper[5119]: I0121 10:43:04.296779 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa0f4cff-a63b-438f-86d1-80d9b562479f-catalog-content\") pod \"certified-operators-8wn2s\" (UID: \"fa0f4cff-a63b-438f-86d1-80d9b562479f\") " pod="openshift-marketplace/certified-operators-8wn2s"
Jan 21 10:43:04 crc kubenswrapper[5119]: I0121 10:43:04.297289 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa0f4cff-a63b-438f-86d1-80d9b562479f-catalog-content\") pod \"certified-operators-8wn2s\" (UID: \"fa0f4cff-a63b-438f-86d1-80d9b562479f\") " pod="openshift-marketplace/certified-operators-8wn2s"
Jan 21 10:43:04 crc kubenswrapper[5119]: I0121 10:43:04.297511 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f8j2d\" (UniqueName: \"kubernetes.io/projected/fa0f4cff-a63b-438f-86d1-80d9b562479f-kube-api-access-f8j2d\") pod \"certified-operators-8wn2s\" (UID: \"fa0f4cff-a63b-438f-86d1-80d9b562479f\") " pod="openshift-marketplace/certified-operators-8wn2s"
Jan 21 10:43:04 crc kubenswrapper[5119]: I0121 10:43:04.297581 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa0f4cff-a63b-438f-86d1-80d9b562479f-utilities\") pod \"certified-operators-8wn2s\" (UID: \"fa0f4cff-a63b-438f-86d1-80d9b562479f\") " pod="openshift-marketplace/certified-operators-8wn2s"
Jan 21 10:43:04 crc kubenswrapper[5119]: I0121 10:43:04.298079 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa0f4cff-a63b-438f-86d1-80d9b562479f-utilities\") pod \"certified-operators-8wn2s\" (UID: \"fa0f4cff-a63b-438f-86d1-80d9b562479f\") " pod="openshift-marketplace/certified-operators-8wn2s"
Jan 21 10:43:04 crc kubenswrapper[5119]: I0121 10:43:04.333547 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8j2d\" (UniqueName: \"kubernetes.io/projected/fa0f4cff-a63b-438f-86d1-80d9b562479f-kube-api-access-f8j2d\") pod \"certified-operators-8wn2s\" (UID: \"fa0f4cff-a63b-438f-86d1-80d9b562479f\") " pod="openshift-marketplace/certified-operators-8wn2s"
Jan 21 10:43:04 crc kubenswrapper[5119]: I0121 10:43:04.427148 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8wn2s"
Jan 21 10:43:04 crc kubenswrapper[5119]: I0121 10:43:04.641835 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8wn2s"]
Jan 21 10:43:05 crc kubenswrapper[5119]: I0121 10:43:05.022479 5119 generic.go:358] "Generic (PLEG): container finished" podID="fa0f4cff-a63b-438f-86d1-80d9b562479f" containerID="e8b2b99a6a50ad181f32f206560b768fd9cc25ed5309cd0d87117e7827bfef3a" exitCode=0
Jan 21 10:43:05 crc kubenswrapper[5119]: I0121 10:43:05.022676 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8wn2s" event={"ID":"fa0f4cff-a63b-438f-86d1-80d9b562479f","Type":"ContainerDied","Data":"e8b2b99a6a50ad181f32f206560b768fd9cc25ed5309cd0d87117e7827bfef3a"}
Jan 21 10:43:05 crc kubenswrapper[5119]: I0121 10:43:05.023855 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8wn2s" event={"ID":"fa0f4cff-a63b-438f-86d1-80d9b562479f","Type":"ContainerStarted","Data":"3ae77de1b963f88f44c2c64011b9673f2a7b28417f63fd207aa1bd7be22b312c"}
Jan 21 10:43:09 crc kubenswrapper[5119]: I0121 10:43:09.066292 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8wn2s" event={"ID":"fa0f4cff-a63b-438f-86d1-80d9b562479f","Type":"ContainerStarted","Data":"ff617835eee6c5235b17b450ffda997900a3add80637c3d2bd0c38d39173b833"}
Jan 21 10:43:10 crc kubenswrapper[5119]: I0121 10:43:10.075449 5119 generic.go:358] "Generic (PLEG): container finished" podID="fa0f4cff-a63b-438f-86d1-80d9b562479f" containerID="ff617835eee6c5235b17b450ffda997900a3add80637c3d2bd0c38d39173b833" exitCode=0
Jan 21 10:43:10 crc kubenswrapper[5119]: I0121 10:43:10.075500 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8wn2s" event={"ID":"fa0f4cff-a63b-438f-86d1-80d9b562479f","Type":"ContainerDied","Data":"ff617835eee6c5235b17b450ffda997900a3add80637c3d2bd0c38d39173b833"}
Jan 21 10:43:11 crc kubenswrapper[5119]: I0121 10:43:11.085285 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8wn2s" event={"ID":"fa0f4cff-a63b-438f-86d1-80d9b562479f","Type":"ContainerStarted","Data":"423a84b5e841d2cb496b1cf7ad3aa8eaa3f9f6dafc973988b55442b48b7532ac"}
Jan 21 10:43:11 crc kubenswrapper[5119]: I0121 10:43:11.107318 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8wn2s" podStartSLOduration=4.23620693 podStartE2EDuration="8.107300803s" podCreationTimestamp="2026-01-21 10:43:03 +0000 UTC" firstStartedPulling="2026-01-21 10:43:05.023436948 +0000 UTC m=+2900.691528616" lastFinishedPulling="2026-01-21 10:43:08.894530811 +0000 UTC m=+2904.562622489" observedRunningTime="2026-01-21 10:43:11.10388813 +0000 UTC m=+2906.771979808" watchObservedRunningTime="2026-01-21 10:43:11.107300803 +0000 UTC m=+2906.775392481"
Jan 21 10:43:14 crc kubenswrapper[5119]: I0121 10:43:14.429039 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-8wn2s"
Jan 21 10:43:14 crc kubenswrapper[5119]: I0121 10:43:14.429369 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8wn2s"
Jan 21 10:43:14 crc kubenswrapper[5119]: I0121 10:43:14.488566 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8wn2s"
Jan 21 10:43:15 crc kubenswrapper[5119]: I0121 10:43:15.151010 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8wn2s"
Jan 21 10:43:15 crc kubenswrapper[5119]: I0121 10:43:15.193261 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8wn2s"]
Jan 21 10:43:17 crc kubenswrapper[5119]: I0121 10:43:17.126796 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8wn2s" podUID="fa0f4cff-a63b-438f-86d1-80d9b562479f" containerName="registry-server" containerID="cri-o://423a84b5e841d2cb496b1cf7ad3aa8eaa3f9f6dafc973988b55442b48b7532ac" gracePeriod=2
Jan 21 10:43:20 crc kubenswrapper[5119]: I0121 10:43:20.149302 5119 generic.go:358] "Generic (PLEG): container finished" podID="fa0f4cff-a63b-438f-86d1-80d9b562479f" containerID="423a84b5e841d2cb496b1cf7ad3aa8eaa3f9f6dafc973988b55442b48b7532ac" exitCode=0
Jan 21 10:43:20 crc kubenswrapper[5119]: I0121 10:43:20.149381 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8wn2s" event={"ID":"fa0f4cff-a63b-438f-86d1-80d9b562479f","Type":"ContainerDied","Data":"423a84b5e841d2cb496b1cf7ad3aa8eaa3f9f6dafc973988b55442b48b7532ac"}
Jan 21 10:43:20 crc kubenswrapper[5119]: I0121 10:43:20.468353 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8wn2s"
Jan 21 10:43:20 crc kubenswrapper[5119]: I0121 10:43:20.535429 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8j2d\" (UniqueName: \"kubernetes.io/projected/fa0f4cff-a63b-438f-86d1-80d9b562479f-kube-api-access-f8j2d\") pod \"fa0f4cff-a63b-438f-86d1-80d9b562479f\" (UID: \"fa0f4cff-a63b-438f-86d1-80d9b562479f\") "
Jan 21 10:43:20 crc kubenswrapper[5119]: I0121 10:43:20.535520 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa0f4cff-a63b-438f-86d1-80d9b562479f-utilities\") pod \"fa0f4cff-a63b-438f-86d1-80d9b562479f\" (UID: \"fa0f4cff-a63b-438f-86d1-80d9b562479f\") "
Jan 21 10:43:20 crc kubenswrapper[5119]: I0121 10:43:20.535678 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa0f4cff-a63b-438f-86d1-80d9b562479f-catalog-content\") pod \"fa0f4cff-a63b-438f-86d1-80d9b562479f\" (UID: \"fa0f4cff-a63b-438f-86d1-80d9b562479f\") "
Jan 21 10:43:20 crc kubenswrapper[5119]: I0121 10:43:20.537531 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa0f4cff-a63b-438f-86d1-80d9b562479f-utilities" (OuterVolumeSpecName: "utilities") pod "fa0f4cff-a63b-438f-86d1-80d9b562479f" (UID: "fa0f4cff-a63b-438f-86d1-80d9b562479f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:43:20 crc kubenswrapper[5119]: I0121 10:43:20.541247 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa0f4cff-a63b-438f-86d1-80d9b562479f-kube-api-access-f8j2d" (OuterVolumeSpecName: "kube-api-access-f8j2d") pod "fa0f4cff-a63b-438f-86d1-80d9b562479f" (UID: "fa0f4cff-a63b-438f-86d1-80d9b562479f"). InnerVolumeSpecName "kube-api-access-f8j2d". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:43:20 crc kubenswrapper[5119]: I0121 10:43:20.580874 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa0f4cff-a63b-438f-86d1-80d9b562479f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa0f4cff-a63b-438f-86d1-80d9b562479f" (UID: "fa0f4cff-a63b-438f-86d1-80d9b562479f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:43:20 crc kubenswrapper[5119]: I0121 10:43:20.637285 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa0f4cff-a63b-438f-86d1-80d9b562479f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 10:43:20 crc kubenswrapper[5119]: I0121 10:43:20.637313 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f8j2d\" (UniqueName: \"kubernetes.io/projected/fa0f4cff-a63b-438f-86d1-80d9b562479f-kube-api-access-f8j2d\") on node \"crc\" DevicePath \"\""
Jan 21 10:43:20 crc kubenswrapper[5119]: I0121 10:43:20.637322 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa0f4cff-a63b-438f-86d1-80d9b562479f-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 10:43:21 crc kubenswrapper[5119]: I0121 10:43:21.160346 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8wn2s"
Jan 21 10:43:21 crc kubenswrapper[5119]: I0121 10:43:21.160345 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8wn2s" event={"ID":"fa0f4cff-a63b-438f-86d1-80d9b562479f","Type":"ContainerDied","Data":"3ae77de1b963f88f44c2c64011b9673f2a7b28417f63fd207aa1bd7be22b312c"}
Jan 21 10:43:21 crc kubenswrapper[5119]: I0121 10:43:21.160883 5119 scope.go:117] "RemoveContainer" containerID="423a84b5e841d2cb496b1cf7ad3aa8eaa3f9f6dafc973988b55442b48b7532ac"
Jan 21 10:43:21 crc kubenswrapper[5119]: I0121 10:43:21.182755 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8wn2s"]
Jan 21 10:43:21 crc kubenswrapper[5119]: I0121 10:43:21.194149 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8wn2s"]
Jan 21 10:43:21 crc kubenswrapper[5119]: I0121 10:43:21.200304 5119 scope.go:117] "RemoveContainer" containerID="ff617835eee6c5235b17b450ffda997900a3add80637c3d2bd0c38d39173b833"
Jan 21 10:43:21 crc kubenswrapper[5119]: I0121 10:43:21.219456 5119 scope.go:117] "RemoveContainer" containerID="e8b2b99a6a50ad181f32f206560b768fd9cc25ed5309cd0d87117e7827bfef3a"
Jan 21 10:43:22 crc kubenswrapper[5119]: I0121 10:43:22.599591 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa0f4cff-a63b-438f-86d1-80d9b562479f" path="/var/lib/kubelet/pods/fa0f4cff-a63b-438f-86d1-80d9b562479f/volumes"
Jan 21 10:43:49 crc kubenswrapper[5119]: I0121 10:43:49.918935 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 10:43:49 crc kubenswrapper[5119]: I0121 10:43:49.919541 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 10:44:00 crc kubenswrapper[5119]: I0121 10:44:00.140725 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483204-bpbjq"]
Jan 21 10:44:00 crc kubenswrapper[5119]: I0121 10:44:00.142650 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fa0f4cff-a63b-438f-86d1-80d9b562479f" containerName="extract-content"
Jan 21 10:44:00 crc kubenswrapper[5119]: I0121 10:44:00.142676 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa0f4cff-a63b-438f-86d1-80d9b562479f" containerName="extract-content"
Jan 21 10:44:00 crc kubenswrapper[5119]: I0121 10:44:00.142711 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fa0f4cff-a63b-438f-86d1-80d9b562479f" containerName="extract-utilities"
Jan 21 10:44:00 crc kubenswrapper[5119]: I0121 10:44:00.142721 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa0f4cff-a63b-438f-86d1-80d9b562479f" containerName="extract-utilities"
Jan 21 10:44:00 crc kubenswrapper[5119]: I0121 10:44:00.142757 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fa0f4cff-a63b-438f-86d1-80d9b562479f" containerName="registry-server"
Jan 21 10:44:00 crc kubenswrapper[5119]: I0121 10:44:00.142773 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa0f4cff-a63b-438f-86d1-80d9b562479f" containerName="registry-server"
Jan 21 10:44:00 crc kubenswrapper[5119]: I0121 10:44:00.143135 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="fa0f4cff-a63b-438f-86d1-80d9b562479f" containerName="registry-server"
Jan 21 10:44:00 crc kubenswrapper[5119]: I0121 10:44:00.212042 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483204-bpbjq"
Jan 21 10:44:00 crc kubenswrapper[5119]: I0121 10:44:00.211914 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483204-bpbjq"]
Jan 21 10:44:00 crc kubenswrapper[5119]: I0121 10:44:00.215275 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\""
Jan 21 10:44:00 crc kubenswrapper[5119]: I0121 10:44:00.215538 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 10:44:00 crc kubenswrapper[5119]: I0121 10:44:00.215782 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 10:44:00 crc kubenswrapper[5119]: I0121 10:44:00.341156 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g9l7\" (UniqueName: \"kubernetes.io/projected/0b3e790a-dae4-4bb2-a739-89114727281c-kube-api-access-2g9l7\") pod \"auto-csr-approver-29483204-bpbjq\" (UID: \"0b3e790a-dae4-4bb2-a739-89114727281c\") " pod="openshift-infra/auto-csr-approver-29483204-bpbjq"
Jan 21 10:44:00 crc kubenswrapper[5119]: I0121 10:44:00.443022 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2g9l7\" (UniqueName: \"kubernetes.io/projected/0b3e790a-dae4-4bb2-a739-89114727281c-kube-api-access-2g9l7\") pod \"auto-csr-approver-29483204-bpbjq\" (UID: \"0b3e790a-dae4-4bb2-a739-89114727281c\") " pod="openshift-infra/auto-csr-approver-29483204-bpbjq"
Jan 21 10:44:00 crc kubenswrapper[5119]: I0121 10:44:00.465576 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g9l7\" (UniqueName: \"kubernetes.io/projected/0b3e790a-dae4-4bb2-a739-89114727281c-kube-api-access-2g9l7\") pod \"auto-csr-approver-29483204-bpbjq\" (UID: \"0b3e790a-dae4-4bb2-a739-89114727281c\") " pod="openshift-infra/auto-csr-approver-29483204-bpbjq"
Jan 21 10:44:00 crc kubenswrapper[5119]: I0121 10:44:00.530687 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483204-bpbjq"
Jan 21 10:44:00 crc kubenswrapper[5119]: I0121 10:44:00.986616 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483204-bpbjq"]
Jan 21 10:44:01 crc kubenswrapper[5119]: I0121 10:44:01.496898 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483204-bpbjq" event={"ID":"0b3e790a-dae4-4bb2-a739-89114727281c","Type":"ContainerStarted","Data":"c757f4406b92b8bd722bcaf24a9cf31efeffb8559ea56894d7a79d126f7d2f31"}
Jan 21 10:44:02 crc kubenswrapper[5119]: I0121 10:44:02.505111 5119 generic.go:358] "Generic (PLEG): container finished" podID="0b3e790a-dae4-4bb2-a739-89114727281c" containerID="3a32c29aa565ba10f984197905fb045fa914498b3398f5cae3798b21f3815abf" exitCode=0
Jan 21 10:44:02 crc kubenswrapper[5119]: I0121 10:44:02.505318 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483204-bpbjq" event={"ID":"0b3e790a-dae4-4bb2-a739-89114727281c","Type":"ContainerDied","Data":"3a32c29aa565ba10f984197905fb045fa914498b3398f5cae3798b21f3815abf"}
Jan 21 10:44:03 crc kubenswrapper[5119]: I0121 10:44:03.739787 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483204-bpbjq"
Jan 21 10:44:03 crc kubenswrapper[5119]: I0121 10:44:03.785258 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2g9l7\" (UniqueName: \"kubernetes.io/projected/0b3e790a-dae4-4bb2-a739-89114727281c-kube-api-access-2g9l7\") pod \"0b3e790a-dae4-4bb2-a739-89114727281c\" (UID: \"0b3e790a-dae4-4bb2-a739-89114727281c\") "
Jan 21 10:44:03 crc kubenswrapper[5119]: I0121 10:44:03.792505 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b3e790a-dae4-4bb2-a739-89114727281c-kube-api-access-2g9l7" (OuterVolumeSpecName: "kube-api-access-2g9l7") pod "0b3e790a-dae4-4bb2-a739-89114727281c" (UID: "0b3e790a-dae4-4bb2-a739-89114727281c"). InnerVolumeSpecName "kube-api-access-2g9l7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:44:03 crc kubenswrapper[5119]: I0121 10:44:03.886807 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2g9l7\" (UniqueName: \"kubernetes.io/projected/0b3e790a-dae4-4bb2-a739-89114727281c-kube-api-access-2g9l7\") on node \"crc\" DevicePath \"\""
Jan 21 10:44:04 crc kubenswrapper[5119]: I0121 10:44:04.519148 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483204-bpbjq"
Jan 21 10:44:04 crc kubenswrapper[5119]: I0121 10:44:04.519165 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483204-bpbjq" event={"ID":"0b3e790a-dae4-4bb2-a739-89114727281c","Type":"ContainerDied","Data":"c757f4406b92b8bd722bcaf24a9cf31efeffb8559ea56894d7a79d126f7d2f31"}
Jan 21 10:44:04 crc kubenswrapper[5119]: I0121 10:44:04.519191 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c757f4406b92b8bd722bcaf24a9cf31efeffb8559ea56894d7a79d126f7d2f31"
Jan 21 10:44:04 crc kubenswrapper[5119]: I0121 10:44:04.793995 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483198-fg4gr"]
Jan 21 10:44:04 crc kubenswrapper[5119]: I0121 10:44:04.810569 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483198-fg4gr"]
Jan 21 10:44:06 crc kubenswrapper[5119]: I0121 10:44:06.603239 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4b01156-a29d-4fa8-ac51-e199710ceee8" path="/var/lib/kubelet/pods/e4b01156-a29d-4fa8-ac51-e199710ceee8/volumes"
Jan 21 10:44:19 crc kubenswrapper[5119]: I0121 10:44:19.919409 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 10:44:19 crc kubenswrapper[5119]: I0121 10:44:19.920023 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 10:44:46 crc kubenswrapper[5119]: I0121 10:44:46.606219 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log"
Jan 21 10:44:46 crc kubenswrapper[5119]: I0121 10:44:46.607323 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log"
Jan 21 10:44:46 crc kubenswrapper[5119]: I0121 10:44:46.610351 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Jan 21 10:44:46 crc kubenswrapper[5119]: I0121 10:44:46.610561 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Jan 21 10:44:49 crc kubenswrapper[5119]: I0121 10:44:49.919541 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 10:44:49 crc kubenswrapper[5119]: I0121 10:44:49.919980 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 10:44:49 crc kubenswrapper[5119]: I0121 10:44:49.920052 5119 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk"
Jan 21 10:44:49 crc kubenswrapper[5119]: I0121 10:44:49.921141 5119 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8c6b266358486376a0f62632d71df43c8b4cdbd094647fbf4ce48a482ef4cbb9"} pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 10:44:49 crc kubenswrapper[5119]: I0121 10:44:49.921273 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" containerID="cri-o://8c6b266358486376a0f62632d71df43c8b4cdbd094647fbf4ce48a482ef4cbb9" gracePeriod=600
Jan 21 10:44:50 crc kubenswrapper[5119]: I0121 10:44:50.946577 5119 generic.go:358] "Generic (PLEG): container finished" podID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerID="8c6b266358486376a0f62632d71df43c8b4cdbd094647fbf4ce48a482ef4cbb9" exitCode=0
Jan 21 10:44:50 crc kubenswrapper[5119]: I0121 10:44:50.946642 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerDied","Data":"8c6b266358486376a0f62632d71df43c8b4cdbd094647fbf4ce48a482ef4cbb9"}
Jan 21 10:44:50 crc kubenswrapper[5119]: I0121 10:44:50.947227 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerStarted","Data":"a521ed658fa71bfaf54a98ad7b4204bf0ba78d4dc0e831fed9b773d9403d0b7f"}
Jan 21 10:44:50 crc kubenswrapper[5119]: I0121 10:44:50.947251 5119 scope.go:117] "RemoveContainer" containerID="73bddcb64083531b50226595722784d64d95d2bc702addc9793ddeca381cc5d7"
Jan 21 10:44:52 crc kubenswrapper[5119]: I0121 10:44:52.976290 5119 scope.go:117] "RemoveContainer" containerID="9c0277099616d067d9a2fec87ce06e022aab798b649da418c42cafd6ba326e4c"
Jan 21
10:45:00 crc kubenswrapper[5119]: I0121 10:45:00.139945 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483205-jdspv"] Jan 21 10:45:00 crc kubenswrapper[5119]: I0121 10:45:00.141075 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0b3e790a-dae4-4bb2-a739-89114727281c" containerName="oc" Jan 21 10:45:00 crc kubenswrapper[5119]: I0121 10:45:00.141101 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3e790a-dae4-4bb2-a739-89114727281c" containerName="oc" Jan 21 10:45:00 crc kubenswrapper[5119]: I0121 10:45:00.141254 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="0b3e790a-dae4-4bb2-a739-89114727281c" containerName="oc" Jan 21 10:45:00 crc kubenswrapper[5119]: I0121 10:45:00.835398 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-jdspv" Jan 21 10:45:00 crc kubenswrapper[5119]: I0121 10:45:00.847101 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 21 10:45:00 crc kubenswrapper[5119]: I0121 10:45:00.847217 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483205-jdspv"] Jan 21 10:45:00 crc kubenswrapper[5119]: I0121 10:45:00.847224 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 21 10:45:00 crc kubenswrapper[5119]: I0121 10:45:00.962078 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cce77d8a-2f31-4d99-8c60-490c5eb93bc5-secret-volume\") pod \"collect-profiles-29483205-jdspv\" (UID: \"cce77d8a-2f31-4d99-8c60-490c5eb93bc5\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-jdspv" Jan 21 10:45:00 crc kubenswrapper[5119]: I0121 10:45:00.962455 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cce77d8a-2f31-4d99-8c60-490c5eb93bc5-config-volume\") pod \"collect-profiles-29483205-jdspv\" (UID: \"cce77d8a-2f31-4d99-8c60-490c5eb93bc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-jdspv" Jan 21 10:45:00 crc kubenswrapper[5119]: I0121 10:45:00.962514 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x46vj\" (UniqueName: \"kubernetes.io/projected/cce77d8a-2f31-4d99-8c60-490c5eb93bc5-kube-api-access-x46vj\") pod \"collect-profiles-29483205-jdspv\" (UID: \"cce77d8a-2f31-4d99-8c60-490c5eb93bc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-jdspv" Jan 21 10:45:01 crc kubenswrapper[5119]: I0121 10:45:01.064304 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cce77d8a-2f31-4d99-8c60-490c5eb93bc5-secret-volume\") pod \"collect-profiles-29483205-jdspv\" (UID: \"cce77d8a-2f31-4d99-8c60-490c5eb93bc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-jdspv" Jan 21 10:45:01 crc kubenswrapper[5119]: I0121 10:45:01.064407 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cce77d8a-2f31-4d99-8c60-490c5eb93bc5-config-volume\") pod \"collect-profiles-29483205-jdspv\" (UID: \"cce77d8a-2f31-4d99-8c60-490c5eb93bc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-jdspv" Jan 21 10:45:01 crc kubenswrapper[5119]: I0121 10:45:01.064431 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x46vj\" (UniqueName: 
\"kubernetes.io/projected/cce77d8a-2f31-4d99-8c60-490c5eb93bc5-kube-api-access-x46vj\") pod \"collect-profiles-29483205-jdspv\" (UID: \"cce77d8a-2f31-4d99-8c60-490c5eb93bc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-jdspv" Jan 21 10:45:01 crc kubenswrapper[5119]: I0121 10:45:01.065974 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cce77d8a-2f31-4d99-8c60-490c5eb93bc5-config-volume\") pod \"collect-profiles-29483205-jdspv\" (UID: \"cce77d8a-2f31-4d99-8c60-490c5eb93bc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-jdspv" Jan 21 10:45:01 crc kubenswrapper[5119]: I0121 10:45:01.072905 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cce77d8a-2f31-4d99-8c60-490c5eb93bc5-secret-volume\") pod \"collect-profiles-29483205-jdspv\" (UID: \"cce77d8a-2f31-4d99-8c60-490c5eb93bc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-jdspv" Jan 21 10:45:01 crc kubenswrapper[5119]: I0121 10:45:01.082135 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x46vj\" (UniqueName: \"kubernetes.io/projected/cce77d8a-2f31-4d99-8c60-490c5eb93bc5-kube-api-access-x46vj\") pod \"collect-profiles-29483205-jdspv\" (UID: \"cce77d8a-2f31-4d99-8c60-490c5eb93bc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-jdspv" Jan 21 10:45:01 crc kubenswrapper[5119]: I0121 10:45:01.176792 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-jdspv" Jan 21 10:45:01 crc kubenswrapper[5119]: I0121 10:45:01.574841 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483205-jdspv"] Jan 21 10:45:02 crc kubenswrapper[5119]: I0121 10:45:02.050782 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-jdspv" event={"ID":"cce77d8a-2f31-4d99-8c60-490c5eb93bc5","Type":"ContainerStarted","Data":"3fb306a4ebb3fff933e25d7f67f62294992b73b3e5b917d79f62d4de15c8e6e2"} Jan 21 10:45:02 crc kubenswrapper[5119]: I0121 10:45:02.051071 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-jdspv" event={"ID":"cce77d8a-2f31-4d99-8c60-490c5eb93bc5","Type":"ContainerStarted","Data":"ad173ba421981ce273fda31e61b2de761d1eed4bd5bf577728921714dae675f9"} Jan 21 10:45:02 crc kubenswrapper[5119]: I0121 10:45:02.066558 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-jdspv" podStartSLOduration=2.066522533 podStartE2EDuration="2.066522533s" podCreationTimestamp="2026-01-21 10:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:45:02.062922085 +0000 UTC m=+3017.731013773" watchObservedRunningTime="2026-01-21 10:45:02.066522533 +0000 UTC m=+3017.734614211" Jan 21 10:45:03 crc kubenswrapper[5119]: I0121 10:45:03.060461 5119 generic.go:358] "Generic (PLEG): container finished" podID="cce77d8a-2f31-4d99-8c60-490c5eb93bc5" containerID="3fb306a4ebb3fff933e25d7f67f62294992b73b3e5b917d79f62d4de15c8e6e2" exitCode=0 Jan 21 10:45:03 crc kubenswrapper[5119]: I0121 10:45:03.060543 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-jdspv" event={"ID":"cce77d8a-2f31-4d99-8c60-490c5eb93bc5","Type":"ContainerDied","Data":"3fb306a4ebb3fff933e25d7f67f62294992b73b3e5b917d79f62d4de15c8e6e2"} Jan 21 10:45:04 crc kubenswrapper[5119]: I0121 10:45:04.366896 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-jdspv" Jan 21 10:45:04 crc kubenswrapper[5119]: I0121 10:45:04.407367 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x46vj\" (UniqueName: \"kubernetes.io/projected/cce77d8a-2f31-4d99-8c60-490c5eb93bc5-kube-api-access-x46vj\") pod \"cce77d8a-2f31-4d99-8c60-490c5eb93bc5\" (UID: \"cce77d8a-2f31-4d99-8c60-490c5eb93bc5\") " Jan 21 10:45:04 crc kubenswrapper[5119]: I0121 10:45:04.407450 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cce77d8a-2f31-4d99-8c60-490c5eb93bc5-secret-volume\") pod \"cce77d8a-2f31-4d99-8c60-490c5eb93bc5\" (UID: \"cce77d8a-2f31-4d99-8c60-490c5eb93bc5\") " Jan 21 10:45:04 crc kubenswrapper[5119]: I0121 10:45:04.407987 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cce77d8a-2f31-4d99-8c60-490c5eb93bc5-config-volume\") pod \"cce77d8a-2f31-4d99-8c60-490c5eb93bc5\" (UID: \"cce77d8a-2f31-4d99-8c60-490c5eb93bc5\") " Jan 21 10:45:04 crc kubenswrapper[5119]: I0121 10:45:04.409214 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cce77d8a-2f31-4d99-8c60-490c5eb93bc5-config-volume" (OuterVolumeSpecName: "config-volume") pod "cce77d8a-2f31-4d99-8c60-490c5eb93bc5" (UID: "cce77d8a-2f31-4d99-8c60-490c5eb93bc5"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:45:04 crc kubenswrapper[5119]: I0121 10:45:04.415154 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cce77d8a-2f31-4d99-8c60-490c5eb93bc5-kube-api-access-x46vj" (OuterVolumeSpecName: "kube-api-access-x46vj") pod "cce77d8a-2f31-4d99-8c60-490c5eb93bc5" (UID: "cce77d8a-2f31-4d99-8c60-490c5eb93bc5"). InnerVolumeSpecName "kube-api-access-x46vj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:45:04 crc kubenswrapper[5119]: I0121 10:45:04.415180 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cce77d8a-2f31-4d99-8c60-490c5eb93bc5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cce77d8a-2f31-4d99-8c60-490c5eb93bc5" (UID: "cce77d8a-2f31-4d99-8c60-490c5eb93bc5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:45:04 crc kubenswrapper[5119]: I0121 10:45:04.511072 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x46vj\" (UniqueName: \"kubernetes.io/projected/cce77d8a-2f31-4d99-8c60-490c5eb93bc5-kube-api-access-x46vj\") on node \"crc\" DevicePath \"\"" Jan 21 10:45:04 crc kubenswrapper[5119]: I0121 10:45:04.511426 5119 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cce77d8a-2f31-4d99-8c60-490c5eb93bc5-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 10:45:04 crc kubenswrapper[5119]: I0121 10:45:04.511440 5119 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cce77d8a-2f31-4d99-8c60-490c5eb93bc5-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 10:45:04 crc kubenswrapper[5119]: I0121 10:45:04.632251 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483160-xpdns"] Jan 21 10:45:04 crc kubenswrapper[5119]: 
I0121 10:45:04.644445 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483160-xpdns"] Jan 21 10:45:05 crc kubenswrapper[5119]: I0121 10:45:05.076482 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-jdspv" event={"ID":"cce77d8a-2f31-4d99-8c60-490c5eb93bc5","Type":"ContainerDied","Data":"ad173ba421981ce273fda31e61b2de761d1eed4bd5bf577728921714dae675f9"} Jan 21 10:45:05 crc kubenswrapper[5119]: I0121 10:45:05.076512 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-jdspv" Jan 21 10:45:05 crc kubenswrapper[5119]: I0121 10:45:05.076531 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad173ba421981ce273fda31e61b2de761d1eed4bd5bf577728921714dae675f9" Jan 21 10:45:06 crc kubenswrapper[5119]: I0121 10:45:06.599504 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e21db20a-6f9d-4663-bbf3-8e729ec4774f" path="/var/lib/kubelet/pods/e21db20a-6f9d-4663-bbf3-8e729ec4774f/volumes" Jan 21 10:45:53 crc kubenswrapper[5119]: I0121 10:45:53.094515 5119 scope.go:117] "RemoveContainer" containerID="5b96281786f97084cb05a503f85765924454f8cab18b186bc08df19c997e36b3" Jan 21 10:45:53 crc kubenswrapper[5119]: I0121 10:45:53.129497 5119 scope.go:117] "RemoveContainer" containerID="c2a3007cc8da8ee194c925278798ae4449d330ff93ebd1f6bb5a536eb1236b24" Jan 21 10:45:53 crc kubenswrapper[5119]: I0121 10:45:53.145700 5119 scope.go:117] "RemoveContainer" containerID="5552b45a7962c5b24741b62df86b8e49a4a1acffcce6b81b36cdb3a5cc1416e2" Jan 21 10:45:53 crc kubenswrapper[5119]: I0121 10:45:53.173551 5119 scope.go:117] "RemoveContainer" containerID="3895c8ae006c3a30aa062e47968a8470d647dfc18c0e6e4f8de52a746e7bfe5d" Jan 21 10:46:00 crc kubenswrapper[5119]: I0121 10:46:00.131909 5119 kubelet.go:2537] "SyncLoop ADD" 
source="api" pods=["openshift-infra/auto-csr-approver-29483206-ztzsq"] Jan 21 10:46:00 crc kubenswrapper[5119]: I0121 10:46:00.133436 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cce77d8a-2f31-4d99-8c60-490c5eb93bc5" containerName="collect-profiles" Jan 21 10:46:00 crc kubenswrapper[5119]: I0121 10:46:00.133457 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="cce77d8a-2f31-4d99-8c60-490c5eb93bc5" containerName="collect-profiles" Jan 21 10:46:00 crc kubenswrapper[5119]: I0121 10:46:00.133723 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="cce77d8a-2f31-4d99-8c60-490c5eb93bc5" containerName="collect-profiles" Jan 21 10:46:00 crc kubenswrapper[5119]: I0121 10:46:00.515448 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483206-ztzsq"] Jan 21 10:46:00 crc kubenswrapper[5119]: I0121 10:46:00.515592 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483206-ztzsq" Jan 21 10:46:00 crc kubenswrapper[5119]: I0121 10:46:00.517714 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:46:00 crc kubenswrapper[5119]: I0121 10:46:00.517732 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\"" Jan 21 10:46:00 crc kubenswrapper[5119]: I0121 10:46:00.517957 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:46:00 crc kubenswrapper[5119]: I0121 10:46:00.597530 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx4t4\" (UniqueName: \"kubernetes.io/projected/5b0dc651-91c8-43c9-99f4-b1ddc64cbc9b-kube-api-access-tx4t4\") pod \"auto-csr-approver-29483206-ztzsq\" (UID: 
\"5b0dc651-91c8-43c9-99f4-b1ddc64cbc9b\") " pod="openshift-infra/auto-csr-approver-29483206-ztzsq" Jan 21 10:46:00 crc kubenswrapper[5119]: I0121 10:46:00.699064 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tx4t4\" (UniqueName: \"kubernetes.io/projected/5b0dc651-91c8-43c9-99f4-b1ddc64cbc9b-kube-api-access-tx4t4\") pod \"auto-csr-approver-29483206-ztzsq\" (UID: \"5b0dc651-91c8-43c9-99f4-b1ddc64cbc9b\") " pod="openshift-infra/auto-csr-approver-29483206-ztzsq" Jan 21 10:46:00 crc kubenswrapper[5119]: I0121 10:46:00.724316 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx4t4\" (UniqueName: \"kubernetes.io/projected/5b0dc651-91c8-43c9-99f4-b1ddc64cbc9b-kube-api-access-tx4t4\") pod \"auto-csr-approver-29483206-ztzsq\" (UID: \"5b0dc651-91c8-43c9-99f4-b1ddc64cbc9b\") " pod="openshift-infra/auto-csr-approver-29483206-ztzsq" Jan 21 10:46:00 crc kubenswrapper[5119]: I0121 10:46:00.831600 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483206-ztzsq" Jan 21 10:46:01 crc kubenswrapper[5119]: I0121 10:46:01.233494 5119 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 10:46:01 crc kubenswrapper[5119]: I0121 10:46:01.239784 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483206-ztzsq"] Jan 21 10:46:01 crc kubenswrapper[5119]: I0121 10:46:01.525478 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483206-ztzsq" event={"ID":"5b0dc651-91c8-43c9-99f4-b1ddc64cbc9b","Type":"ContainerStarted","Data":"4d762bece750aafc75435689feb816ab83c5da1a58e503b029cfcf3c57f5203a"} Jan 21 10:46:04 crc kubenswrapper[5119]: I0121 10:46:04.548897 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483206-ztzsq" event={"ID":"5b0dc651-91c8-43c9-99f4-b1ddc64cbc9b","Type":"ContainerStarted","Data":"541f6eb23808a3d0ebeb18c0e30e89a0e6ddb593bb15918d2e4c61a16cba45ef"} Jan 21 10:46:04 crc kubenswrapper[5119]: I0121 10:46:04.573535 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29483206-ztzsq" podStartSLOduration=1.766000509 podStartE2EDuration="4.573512838s" podCreationTimestamp="2026-01-21 10:46:00 +0000 UTC" firstStartedPulling="2026-01-21 10:46:01.233963126 +0000 UTC m=+3076.902054814" lastFinishedPulling="2026-01-21 10:46:04.041475465 +0000 UTC m=+3079.709567143" observedRunningTime="2026-01-21 10:46:04.563985228 +0000 UTC m=+3080.232076896" watchObservedRunningTime="2026-01-21 10:46:04.573512838 +0000 UTC m=+3080.241604526" Jan 21 10:46:05 crc kubenswrapper[5119]: I0121 10:46:05.556278 5119 generic.go:358] "Generic (PLEG): container finished" podID="5b0dc651-91c8-43c9-99f4-b1ddc64cbc9b" containerID="541f6eb23808a3d0ebeb18c0e30e89a0e6ddb593bb15918d2e4c61a16cba45ef" exitCode=0 Jan 21 10:46:05 crc kubenswrapper[5119]: 
I0121 10:46:05.556321 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483206-ztzsq" event={"ID":"5b0dc651-91c8-43c9-99f4-b1ddc64cbc9b","Type":"ContainerDied","Data":"541f6eb23808a3d0ebeb18c0e30e89a0e6ddb593bb15918d2e4c61a16cba45ef"} Jan 21 10:46:06 crc kubenswrapper[5119]: I0121 10:46:06.813281 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483206-ztzsq" Jan 21 10:46:06 crc kubenswrapper[5119]: I0121 10:46:06.990495 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx4t4\" (UniqueName: \"kubernetes.io/projected/5b0dc651-91c8-43c9-99f4-b1ddc64cbc9b-kube-api-access-tx4t4\") pod \"5b0dc651-91c8-43c9-99f4-b1ddc64cbc9b\" (UID: \"5b0dc651-91c8-43c9-99f4-b1ddc64cbc9b\") " Jan 21 10:46:06 crc kubenswrapper[5119]: I0121 10:46:06.997025 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b0dc651-91c8-43c9-99f4-b1ddc64cbc9b-kube-api-access-tx4t4" (OuterVolumeSpecName: "kube-api-access-tx4t4") pod "5b0dc651-91c8-43c9-99f4-b1ddc64cbc9b" (UID: "5b0dc651-91c8-43c9-99f4-b1ddc64cbc9b"). InnerVolumeSpecName "kube-api-access-tx4t4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:46:07 crc kubenswrapper[5119]: I0121 10:46:07.092491 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tx4t4\" (UniqueName: \"kubernetes.io/projected/5b0dc651-91c8-43c9-99f4-b1ddc64cbc9b-kube-api-access-tx4t4\") on node \"crc\" DevicePath \"\"" Jan 21 10:46:07 crc kubenswrapper[5119]: I0121 10:46:07.578117 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483206-ztzsq" Jan 21 10:46:07 crc kubenswrapper[5119]: I0121 10:46:07.578115 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483206-ztzsq" event={"ID":"5b0dc651-91c8-43c9-99f4-b1ddc64cbc9b","Type":"ContainerDied","Data":"4d762bece750aafc75435689feb816ab83c5da1a58e503b029cfcf3c57f5203a"} Jan 21 10:46:07 crc kubenswrapper[5119]: I0121 10:46:07.578619 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d762bece750aafc75435689feb816ab83c5da1a58e503b029cfcf3c57f5203a" Jan 21 10:46:07 crc kubenswrapper[5119]: I0121 10:46:07.631207 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483200-ljjdj"] Jan 21 10:46:07 crc kubenswrapper[5119]: I0121 10:46:07.638310 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483200-ljjdj"] Jan 21 10:46:08 crc kubenswrapper[5119]: I0121 10:46:08.599366 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b5a4720-ab00-4fe0-8d45-4dee8cd7f10a" path="/var/lib/kubelet/pods/2b5a4720-ab00-4fe0-8d45-4dee8cd7f10a/volumes" Jan 21 10:46:19 crc kubenswrapper[5119]: I0121 10:46:19.406869 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-kxghv"] Jan 21 10:46:19 crc kubenswrapper[5119]: I0121 10:46:19.408972 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5b0dc651-91c8-43c9-99f4-b1ddc64cbc9b" containerName="oc" Jan 21 10:46:19 crc kubenswrapper[5119]: I0121 10:46:19.409000 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b0dc651-91c8-43c9-99f4-b1ddc64cbc9b" containerName="oc" Jan 21 10:46:19 crc kubenswrapper[5119]: I0121 10:46:19.409142 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="5b0dc651-91c8-43c9-99f4-b1ddc64cbc9b" containerName="oc" Jan 21 10:46:22 crc 
kubenswrapper[5119]: I0121 10:46:22.091603 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-kxghv" Jan 21 10:46:22 crc kubenswrapper[5119]: I0121 10:46:22.102190 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-kxghv"] Jan 21 10:46:22 crc kubenswrapper[5119]: I0121 10:46:22.201030 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv6p2\" (UniqueName: \"kubernetes.io/projected/91d92907-3298-4f66-a38e-874f66525f7c-kube-api-access-tv6p2\") pod \"infrawatch-operators-kxghv\" (UID: \"91d92907-3298-4f66-a38e-874f66525f7c\") " pod="service-telemetry/infrawatch-operators-kxghv" Jan 21 10:46:22 crc kubenswrapper[5119]: I0121 10:46:22.303099 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tv6p2\" (UniqueName: \"kubernetes.io/projected/91d92907-3298-4f66-a38e-874f66525f7c-kube-api-access-tv6p2\") pod \"infrawatch-operators-kxghv\" (UID: \"91d92907-3298-4f66-a38e-874f66525f7c\") " pod="service-telemetry/infrawatch-operators-kxghv" Jan 21 10:46:22 crc kubenswrapper[5119]: I0121 10:46:22.326920 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv6p2\" (UniqueName: \"kubernetes.io/projected/91d92907-3298-4f66-a38e-874f66525f7c-kube-api-access-tv6p2\") pod \"infrawatch-operators-kxghv\" (UID: \"91d92907-3298-4f66-a38e-874f66525f7c\") " pod="service-telemetry/infrawatch-operators-kxghv" Jan 21 10:46:22 crc kubenswrapper[5119]: I0121 10:46:22.423865 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-kxghv" Jan 21 10:46:22 crc kubenswrapper[5119]: I0121 10:46:22.614866 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-kxghv"] Jan 21 10:46:22 crc kubenswrapper[5119]: I0121 10:46:22.702435 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-kxghv" event={"ID":"91d92907-3298-4f66-a38e-874f66525f7c","Type":"ContainerStarted","Data":"24c69d02805df529ae2a0a68205a080b660d350d60fc180e6381417d0a1b4a21"} Jan 21 10:46:23 crc kubenswrapper[5119]: I0121 10:46:23.711938 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-kxghv" event={"ID":"91d92907-3298-4f66-a38e-874f66525f7c","Type":"ContainerStarted","Data":"639df501a3df4512e7eccec47da518dc790de223a5d74c5a5a10e4005137e8af"} Jan 21 10:46:23 crc kubenswrapper[5119]: I0121 10:46:23.735899 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-kxghv" podStartSLOduration=4.418139252 podStartE2EDuration="4.735872701s" podCreationTimestamp="2026-01-21 10:46:19 +0000 UTC" firstStartedPulling="2026-01-21 10:46:22.625156735 +0000 UTC m=+3098.293248413" lastFinishedPulling="2026-01-21 10:46:22.942890184 +0000 UTC m=+3098.610981862" observedRunningTime="2026-01-21 10:46:23.729484938 +0000 UTC m=+3099.397576646" watchObservedRunningTime="2026-01-21 10:46:23.735872701 +0000 UTC m=+3099.403964399" Jan 21 10:46:32 crc kubenswrapper[5119]: I0121 10:46:32.424369 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-kxghv" Jan 21 10:46:32 crc kubenswrapper[5119]: I0121 10:46:32.425257 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-kxghv" Jan 21 10:46:32 crc kubenswrapper[5119]: I0121 10:46:32.454942 5119 kubelet.go:2658] "SyncLoop 
(probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-kxghv" Jan 21 10:46:32 crc kubenswrapper[5119]: I0121 10:46:32.812497 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-kxghv" Jan 21 10:46:32 crc kubenswrapper[5119]: I0121 10:46:32.860429 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-kxghv"] Jan 21 10:46:34 crc kubenswrapper[5119]: I0121 10:46:34.793374 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-kxghv" podUID="91d92907-3298-4f66-a38e-874f66525f7c" containerName="registry-server" containerID="cri-o://639df501a3df4512e7eccec47da518dc790de223a5d74c5a5a10e4005137e8af" gracePeriod=2 Jan 21 10:46:35 crc kubenswrapper[5119]: I0121 10:46:35.670277 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-kxghv" Jan 21 10:46:35 crc kubenswrapper[5119]: I0121 10:46:35.700380 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv6p2\" (UniqueName: \"kubernetes.io/projected/91d92907-3298-4f66-a38e-874f66525f7c-kube-api-access-tv6p2\") pod \"91d92907-3298-4f66-a38e-874f66525f7c\" (UID: \"91d92907-3298-4f66-a38e-874f66525f7c\") " Jan 21 10:46:35 crc kubenswrapper[5119]: I0121 10:46:35.712719 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91d92907-3298-4f66-a38e-874f66525f7c-kube-api-access-tv6p2" (OuterVolumeSpecName: "kube-api-access-tv6p2") pod "91d92907-3298-4f66-a38e-874f66525f7c" (UID: "91d92907-3298-4f66-a38e-874f66525f7c"). InnerVolumeSpecName "kube-api-access-tv6p2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:46:35 crc kubenswrapper[5119]: I0121 10:46:35.801123 5119 generic.go:358] "Generic (PLEG): container finished" podID="91d92907-3298-4f66-a38e-874f66525f7c" containerID="639df501a3df4512e7eccec47da518dc790de223a5d74c5a5a10e4005137e8af" exitCode=0 Jan 21 10:46:35 crc kubenswrapper[5119]: I0121 10:46:35.801233 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-kxghv" Jan 21 10:46:35 crc kubenswrapper[5119]: I0121 10:46:35.801232 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-kxghv" event={"ID":"91d92907-3298-4f66-a38e-874f66525f7c","Type":"ContainerDied","Data":"639df501a3df4512e7eccec47da518dc790de223a5d74c5a5a10e4005137e8af"} Jan 21 10:46:35 crc kubenswrapper[5119]: I0121 10:46:35.801351 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-kxghv" event={"ID":"91d92907-3298-4f66-a38e-874f66525f7c","Type":"ContainerDied","Data":"24c69d02805df529ae2a0a68205a080b660d350d60fc180e6381417d0a1b4a21"} Jan 21 10:46:35 crc kubenswrapper[5119]: I0121 10:46:35.801372 5119 scope.go:117] "RemoveContainer" containerID="639df501a3df4512e7eccec47da518dc790de223a5d74c5a5a10e4005137e8af" Jan 21 10:46:35 crc kubenswrapper[5119]: I0121 10:46:35.802542 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tv6p2\" (UniqueName: \"kubernetes.io/projected/91d92907-3298-4f66-a38e-874f66525f7c-kube-api-access-tv6p2\") on node \"crc\" DevicePath \"\"" Jan 21 10:46:35 crc kubenswrapper[5119]: I0121 10:46:35.826112 5119 scope.go:117] "RemoveContainer" containerID="639df501a3df4512e7eccec47da518dc790de223a5d74c5a5a10e4005137e8af" Jan 21 10:46:35 crc kubenswrapper[5119]: E0121 10:46:35.826558 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"639df501a3df4512e7eccec47da518dc790de223a5d74c5a5a10e4005137e8af\": container with ID starting with 639df501a3df4512e7eccec47da518dc790de223a5d74c5a5a10e4005137e8af not found: ID does not exist" containerID="639df501a3df4512e7eccec47da518dc790de223a5d74c5a5a10e4005137e8af" Jan 21 10:46:35 crc kubenswrapper[5119]: I0121 10:46:35.826584 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"639df501a3df4512e7eccec47da518dc790de223a5d74c5a5a10e4005137e8af"} err="failed to get container status \"639df501a3df4512e7eccec47da518dc790de223a5d74c5a5a10e4005137e8af\": rpc error: code = NotFound desc = could not find container \"639df501a3df4512e7eccec47da518dc790de223a5d74c5a5a10e4005137e8af\": container with ID starting with 639df501a3df4512e7eccec47da518dc790de223a5d74c5a5a10e4005137e8af not found: ID does not exist" Jan 21 10:46:35 crc kubenswrapper[5119]: I0121 10:46:35.829379 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-kxghv"] Jan 21 10:46:35 crc kubenswrapper[5119]: I0121 10:46:35.835217 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-kxghv"] Jan 21 10:46:36 crc kubenswrapper[5119]: I0121 10:46:36.603302 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91d92907-3298-4f66-a38e-874f66525f7c" path="/var/lib/kubelet/pods/91d92907-3298-4f66-a38e-874f66525f7c/volumes" Jan 21 10:46:53 crc kubenswrapper[5119]: I0121 10:46:53.228767 5119 scope.go:117] "RemoveContainer" containerID="dfc7a70a08ead445df8b85a3dc137559970f5bfe79191da04256d017adf8e4f7" Jan 21 10:47:19 crc kubenswrapper[5119]: I0121 10:47:19.919353 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 
10:47:19 crc kubenswrapper[5119]: I0121 10:47:19.919975 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:47:49 crc kubenswrapper[5119]: I0121 10:47:49.918871 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:47:49 crc kubenswrapper[5119]: I0121 10:47:49.919468 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:48:00 crc kubenswrapper[5119]: I0121 10:48:00.156826 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483208-pq4pr"] Jan 21 10:48:00 crc kubenswrapper[5119]: I0121 10:48:00.158051 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="91d92907-3298-4f66-a38e-874f66525f7c" containerName="registry-server" Jan 21 10:48:00 crc kubenswrapper[5119]: I0121 10:48:00.158068 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="91d92907-3298-4f66-a38e-874f66525f7c" containerName="registry-server" Jan 21 10:48:00 crc kubenswrapper[5119]: I0121 10:48:00.158224 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="91d92907-3298-4f66-a38e-874f66525f7c" containerName="registry-server" Jan 21 10:48:00 crc kubenswrapper[5119]: I0121 10:48:00.163355 5119 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483208-pq4pr" Jan 21 10:48:00 crc kubenswrapper[5119]: I0121 10:48:00.165870 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:48:00 crc kubenswrapper[5119]: I0121 10:48:00.168848 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:48:00 crc kubenswrapper[5119]: I0121 10:48:00.169258 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\"" Jan 21 10:48:00 crc kubenswrapper[5119]: I0121 10:48:00.169588 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483208-pq4pr"] Jan 21 10:48:00 crc kubenswrapper[5119]: I0121 10:48:00.304243 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx78s\" (UniqueName: \"kubernetes.io/projected/8b7059f3-9404-412b-be12-e158a9c9c5f9-kube-api-access-gx78s\") pod \"auto-csr-approver-29483208-pq4pr\" (UID: \"8b7059f3-9404-412b-be12-e158a9c9c5f9\") " pod="openshift-infra/auto-csr-approver-29483208-pq4pr" Jan 21 10:48:00 crc kubenswrapper[5119]: I0121 10:48:00.405952 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gx78s\" (UniqueName: \"kubernetes.io/projected/8b7059f3-9404-412b-be12-e158a9c9c5f9-kube-api-access-gx78s\") pod \"auto-csr-approver-29483208-pq4pr\" (UID: \"8b7059f3-9404-412b-be12-e158a9c9c5f9\") " pod="openshift-infra/auto-csr-approver-29483208-pq4pr" Jan 21 10:48:00 crc kubenswrapper[5119]: I0121 10:48:00.426479 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx78s\" (UniqueName: \"kubernetes.io/projected/8b7059f3-9404-412b-be12-e158a9c9c5f9-kube-api-access-gx78s\") pod 
\"auto-csr-approver-29483208-pq4pr\" (UID: \"8b7059f3-9404-412b-be12-e158a9c9c5f9\") " pod="openshift-infra/auto-csr-approver-29483208-pq4pr" Jan 21 10:48:00 crc kubenswrapper[5119]: I0121 10:48:00.489910 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483208-pq4pr" Jan 21 10:48:00 crc kubenswrapper[5119]: I0121 10:48:00.674905 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483208-pq4pr"] Jan 21 10:48:01 crc kubenswrapper[5119]: I0121 10:48:01.463100 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483208-pq4pr" event={"ID":"8b7059f3-9404-412b-be12-e158a9c9c5f9","Type":"ContainerStarted","Data":"e11e1648bfc4094bb370de86f72bc047accd54fad3254abf14d9af9223ac01cc"} Jan 21 10:48:03 crc kubenswrapper[5119]: I0121 10:48:03.480425 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483208-pq4pr" event={"ID":"8b7059f3-9404-412b-be12-e158a9c9c5f9","Type":"ContainerStarted","Data":"b4537ad2705cf1bdc3d3096b31c3ddd2c9ebec03c36692e643ab0cdca3111ee8"} Jan 21 10:48:04 crc kubenswrapper[5119]: I0121 10:48:04.505189 5119 generic.go:358] "Generic (PLEG): container finished" podID="8b7059f3-9404-412b-be12-e158a9c9c5f9" containerID="b4537ad2705cf1bdc3d3096b31c3ddd2c9ebec03c36692e643ab0cdca3111ee8" exitCode=0 Jan 21 10:48:04 crc kubenswrapper[5119]: I0121 10:48:04.505300 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483208-pq4pr" event={"ID":"8b7059f3-9404-412b-be12-e158a9c9c5f9","Type":"ContainerDied","Data":"b4537ad2705cf1bdc3d3096b31c3ddd2c9ebec03c36692e643ab0cdca3111ee8"} Jan 21 10:48:05 crc kubenswrapper[5119]: I0121 10:48:05.813545 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483208-pq4pr" Jan 21 10:48:05 crc kubenswrapper[5119]: I0121 10:48:05.893904 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx78s\" (UniqueName: \"kubernetes.io/projected/8b7059f3-9404-412b-be12-e158a9c9c5f9-kube-api-access-gx78s\") pod \"8b7059f3-9404-412b-be12-e158a9c9c5f9\" (UID: \"8b7059f3-9404-412b-be12-e158a9c9c5f9\") " Jan 21 10:48:05 crc kubenswrapper[5119]: I0121 10:48:05.899933 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b7059f3-9404-412b-be12-e158a9c9c5f9-kube-api-access-gx78s" (OuterVolumeSpecName: "kube-api-access-gx78s") pod "8b7059f3-9404-412b-be12-e158a9c9c5f9" (UID: "8b7059f3-9404-412b-be12-e158a9c9c5f9"). InnerVolumeSpecName "kube-api-access-gx78s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:48:05 crc kubenswrapper[5119]: I0121 10:48:05.995656 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gx78s\" (UniqueName: \"kubernetes.io/projected/8b7059f3-9404-412b-be12-e158a9c9c5f9-kube-api-access-gx78s\") on node \"crc\" DevicePath \"\"" Jan 21 10:48:06 crc kubenswrapper[5119]: I0121 10:48:06.522279 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483208-pq4pr" event={"ID":"8b7059f3-9404-412b-be12-e158a9c9c5f9","Type":"ContainerDied","Data":"e11e1648bfc4094bb370de86f72bc047accd54fad3254abf14d9af9223ac01cc"} Jan 21 10:48:06 crc kubenswrapper[5119]: I0121 10:48:06.522319 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e11e1648bfc4094bb370de86f72bc047accd54fad3254abf14d9af9223ac01cc" Jan 21 10:48:06 crc kubenswrapper[5119]: I0121 10:48:06.522477 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483208-pq4pr" Jan 21 10:48:06 crc kubenswrapper[5119]: I0121 10:48:06.875291 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483202-fpc66"] Jan 21 10:48:06 crc kubenswrapper[5119]: I0121 10:48:06.886454 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483202-fpc66"] Jan 21 10:48:08 crc kubenswrapper[5119]: I0121 10:48:08.598958 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="559c4288-2bd8-4527-80c7-5928414f4caf" path="/var/lib/kubelet/pods/559c4288-2bd8-4527-80c7-5928414f4caf/volumes" Jan 21 10:48:19 crc kubenswrapper[5119]: I0121 10:48:19.918870 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:48:19 crc kubenswrapper[5119]: I0121 10:48:19.919308 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:48:19 crc kubenswrapper[5119]: I0121 10:48:19.919345 5119 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" Jan 21 10:48:19 crc kubenswrapper[5119]: I0121 10:48:19.919839 5119 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a521ed658fa71bfaf54a98ad7b4204bf0ba78d4dc0e831fed9b773d9403d0b7f"} pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 10:48:19 crc kubenswrapper[5119]: I0121 10:48:19.919895 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" containerID="cri-o://a521ed658fa71bfaf54a98ad7b4204bf0ba78d4dc0e831fed9b773d9403d0b7f" gracePeriod=600 Jan 21 10:48:20 crc kubenswrapper[5119]: E0121 10:48:20.332062 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:48:20 crc kubenswrapper[5119]: I0121 10:48:20.638703 5119 generic.go:358] "Generic (PLEG): container finished" podID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerID="a521ed658fa71bfaf54a98ad7b4204bf0ba78d4dc0e831fed9b773d9403d0b7f" exitCode=0 Jan 21 10:48:20 crc kubenswrapper[5119]: I0121 10:48:20.638796 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerDied","Data":"a521ed658fa71bfaf54a98ad7b4204bf0ba78d4dc0e831fed9b773d9403d0b7f"} Jan 21 10:48:20 crc kubenswrapper[5119]: I0121 10:48:20.638976 5119 scope.go:117] "RemoveContainer" containerID="8c6b266358486376a0f62632d71df43c8b4cdbd094647fbf4ce48a482ef4cbb9" Jan 21 10:48:20 crc kubenswrapper[5119]: I0121 10:48:20.639382 5119 scope.go:117] "RemoveContainer" containerID="a521ed658fa71bfaf54a98ad7b4204bf0ba78d4dc0e831fed9b773d9403d0b7f" Jan 21 10:48:20 crc kubenswrapper[5119]: E0121 10:48:20.639771 5119 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:48:33 crc kubenswrapper[5119]: I0121 10:48:33.590980 5119 scope.go:117] "RemoveContainer" containerID="a521ed658fa71bfaf54a98ad7b4204bf0ba78d4dc0e831fed9b773d9403d0b7f" Jan 21 10:48:33 crc kubenswrapper[5119]: E0121 10:48:33.591757 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:48:44 crc kubenswrapper[5119]: I0121 10:48:44.597526 5119 scope.go:117] "RemoveContainer" containerID="a521ed658fa71bfaf54a98ad7b4204bf0ba78d4dc0e831fed9b773d9403d0b7f" Jan 21 10:48:44 crc kubenswrapper[5119]: E0121 10:48:44.598014 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:48:53 crc kubenswrapper[5119]: I0121 10:48:53.380111 5119 scope.go:117] "RemoveContainer" containerID="c6445e50aefb830e8dd643e4358623fa4bb7a21773f47ae7c368475223d48453" Jan 21 10:48:56 crc kubenswrapper[5119]: I0121 
10:48:56.590923 5119 scope.go:117] "RemoveContainer" containerID="a521ed658fa71bfaf54a98ad7b4204bf0ba78d4dc0e831fed9b773d9403d0b7f" Jan 21 10:48:56 crc kubenswrapper[5119]: E0121 10:48:56.591702 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:49:11 crc kubenswrapper[5119]: I0121 10:49:11.590651 5119 scope.go:117] "RemoveContainer" containerID="a521ed658fa71bfaf54a98ad7b4204bf0ba78d4dc0e831fed9b773d9403d0b7f" Jan 21 10:49:11 crc kubenswrapper[5119]: E0121 10:49:11.591456 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:49:15 crc kubenswrapper[5119]: I0121 10:49:15.946582 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xbmsx"] Jan 21 10:49:15 crc kubenswrapper[5119]: I0121 10:49:15.947554 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8b7059f3-9404-412b-be12-e158a9c9c5f9" containerName="oc" Jan 21 10:49:15 crc kubenswrapper[5119]: I0121 10:49:15.947570 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b7059f3-9404-412b-be12-e158a9c9c5f9" containerName="oc" Jan 21 10:49:15 crc kubenswrapper[5119]: I0121 10:49:15.947745 5119 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="8b7059f3-9404-412b-be12-e158a9c9c5f9" containerName="oc" Jan 21 10:49:16 crc kubenswrapper[5119]: I0121 10:49:16.901195 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xbmsx" Jan 21 10:49:16 crc kubenswrapper[5119]: I0121 10:49:16.911547 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xbmsx"] Jan 21 10:49:16 crc kubenswrapper[5119]: I0121 10:49:16.956934 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db96b6cc-070c-4f62-ad0d-b72a93a8384b-catalog-content\") pod \"community-operators-xbmsx\" (UID: \"db96b6cc-070c-4f62-ad0d-b72a93a8384b\") " pod="openshift-marketplace/community-operators-xbmsx" Jan 21 10:49:16 crc kubenswrapper[5119]: I0121 10:49:16.956997 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8ztd\" (UniqueName: \"kubernetes.io/projected/db96b6cc-070c-4f62-ad0d-b72a93a8384b-kube-api-access-q8ztd\") pod \"community-operators-xbmsx\" (UID: \"db96b6cc-070c-4f62-ad0d-b72a93a8384b\") " pod="openshift-marketplace/community-operators-xbmsx" Jan 21 10:49:16 crc kubenswrapper[5119]: I0121 10:49:16.957109 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db96b6cc-070c-4f62-ad0d-b72a93a8384b-utilities\") pod \"community-operators-xbmsx\" (UID: \"db96b6cc-070c-4f62-ad0d-b72a93a8384b\") " pod="openshift-marketplace/community-operators-xbmsx" Jan 21 10:49:17 crc kubenswrapper[5119]: I0121 10:49:17.059026 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db96b6cc-070c-4f62-ad0d-b72a93a8384b-catalog-content\") pod \"community-operators-xbmsx\" (UID: 
\"db96b6cc-070c-4f62-ad0d-b72a93a8384b\") " pod="openshift-marketplace/community-operators-xbmsx" Jan 21 10:49:17 crc kubenswrapper[5119]: I0121 10:49:17.059095 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q8ztd\" (UniqueName: \"kubernetes.io/projected/db96b6cc-070c-4f62-ad0d-b72a93a8384b-kube-api-access-q8ztd\") pod \"community-operators-xbmsx\" (UID: \"db96b6cc-070c-4f62-ad0d-b72a93a8384b\") " pod="openshift-marketplace/community-operators-xbmsx" Jan 21 10:49:17 crc kubenswrapper[5119]: I0121 10:49:17.059276 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db96b6cc-070c-4f62-ad0d-b72a93a8384b-utilities\") pod \"community-operators-xbmsx\" (UID: \"db96b6cc-070c-4f62-ad0d-b72a93a8384b\") " pod="openshift-marketplace/community-operators-xbmsx" Jan 21 10:49:17 crc kubenswrapper[5119]: I0121 10:49:17.059950 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db96b6cc-070c-4f62-ad0d-b72a93a8384b-utilities\") pod \"community-operators-xbmsx\" (UID: \"db96b6cc-070c-4f62-ad0d-b72a93a8384b\") " pod="openshift-marketplace/community-operators-xbmsx" Jan 21 10:49:17 crc kubenswrapper[5119]: I0121 10:49:17.060093 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db96b6cc-070c-4f62-ad0d-b72a93a8384b-catalog-content\") pod \"community-operators-xbmsx\" (UID: \"db96b6cc-070c-4f62-ad0d-b72a93a8384b\") " pod="openshift-marketplace/community-operators-xbmsx" Jan 21 10:49:17 crc kubenswrapper[5119]: I0121 10:49:17.101572 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8ztd\" (UniqueName: \"kubernetes.io/projected/db96b6cc-070c-4f62-ad0d-b72a93a8384b-kube-api-access-q8ztd\") pod \"community-operators-xbmsx\" (UID: 
\"db96b6cc-070c-4f62-ad0d-b72a93a8384b\") " pod="openshift-marketplace/community-operators-xbmsx" Jan 21 10:49:17 crc kubenswrapper[5119]: I0121 10:49:17.219136 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xbmsx" Jan 21 10:49:17 crc kubenswrapper[5119]: I0121 10:49:17.659023 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xbmsx"] Jan 21 10:49:18 crc kubenswrapper[5119]: I0121 10:49:18.082840 5119 generic.go:358] "Generic (PLEG): container finished" podID="db96b6cc-070c-4f62-ad0d-b72a93a8384b" containerID="f35502eb3e4082409da87e8ccbcdca3c71ae1e35405ea55f691d541c3872cdef" exitCode=0 Jan 21 10:49:18 crc kubenswrapper[5119]: I0121 10:49:18.083076 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xbmsx" event={"ID":"db96b6cc-070c-4f62-ad0d-b72a93a8384b","Type":"ContainerDied","Data":"f35502eb3e4082409da87e8ccbcdca3c71ae1e35405ea55f691d541c3872cdef"} Jan 21 10:49:18 crc kubenswrapper[5119]: I0121 10:49:18.083265 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xbmsx" event={"ID":"db96b6cc-070c-4f62-ad0d-b72a93a8384b","Type":"ContainerStarted","Data":"74c3892334a59806395b8c1a2ee13ea0fc9ca0ff9f209f4159b10f886e362bc6"} Jan 21 10:49:19 crc kubenswrapper[5119]: I0121 10:49:19.109695 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xbmsx" event={"ID":"db96b6cc-070c-4f62-ad0d-b72a93a8384b","Type":"ContainerStarted","Data":"46819dbe81328a2e944f574612e6d9120a76f398ffead7cc352db5153af54f8a"} Jan 21 10:49:20 crc kubenswrapper[5119]: I0121 10:49:20.118511 5119 generic.go:358] "Generic (PLEG): container finished" podID="db96b6cc-070c-4f62-ad0d-b72a93a8384b" containerID="46819dbe81328a2e944f574612e6d9120a76f398ffead7cc352db5153af54f8a" exitCode=0 Jan 21 10:49:20 crc kubenswrapper[5119]: I0121 
10:49:20.118717 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xbmsx" event={"ID":"db96b6cc-070c-4f62-ad0d-b72a93a8384b","Type":"ContainerDied","Data":"46819dbe81328a2e944f574612e6d9120a76f398ffead7cc352db5153af54f8a"} Jan 21 10:49:21 crc kubenswrapper[5119]: I0121 10:49:21.128621 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xbmsx" event={"ID":"db96b6cc-070c-4f62-ad0d-b72a93a8384b","Type":"ContainerStarted","Data":"2cf32246b29e85f8fb3a5d975d50e1b185e41bc50e716d32e4195693c3171da7"} Jan 21 10:49:21 crc kubenswrapper[5119]: I0121 10:49:21.147452 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xbmsx" podStartSLOduration=5.315983782 podStartE2EDuration="6.147432036s" podCreationTimestamp="2026-01-21 10:49:15 +0000 UTC" firstStartedPulling="2026-01-21 10:49:18.083872437 +0000 UTC m=+3273.751964115" lastFinishedPulling="2026-01-21 10:49:18.915320691 +0000 UTC m=+3274.583412369" observedRunningTime="2026-01-21 10:49:21.143293424 +0000 UTC m=+3276.811385112" watchObservedRunningTime="2026-01-21 10:49:21.147432036 +0000 UTC m=+3276.815523734" Jan 21 10:49:22 crc kubenswrapper[5119]: I0121 10:49:22.590925 5119 scope.go:117] "RemoveContainer" containerID="a521ed658fa71bfaf54a98ad7b4204bf0ba78d4dc0e831fed9b773d9403d0b7f" Jan 21 10:49:22 crc kubenswrapper[5119]: E0121 10:49:22.591147 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:49:27 crc kubenswrapper[5119]: I0121 10:49:27.220152 5119 kubelet.go:2658] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xbmsx" Jan 21 10:49:27 crc kubenswrapper[5119]: I0121 10:49:27.220503 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-xbmsx" Jan 21 10:49:27 crc kubenswrapper[5119]: I0121 10:49:27.262555 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xbmsx" Jan 21 10:49:28 crc kubenswrapper[5119]: I0121 10:49:28.217153 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xbmsx" Jan 21 10:49:28 crc kubenswrapper[5119]: I0121 10:49:28.257454 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xbmsx"] Jan 21 10:49:30 crc kubenswrapper[5119]: I0121 10:49:30.195587 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xbmsx" podUID="db96b6cc-070c-4f62-ad0d-b72a93a8384b" containerName="registry-server" containerID="cri-o://2cf32246b29e85f8fb3a5d975d50e1b185e41bc50e716d32e4195693c3171da7" gracePeriod=2 Jan 21 10:49:31 crc kubenswrapper[5119]: I0121 10:49:31.047410 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xbmsx" Jan 21 10:49:31 crc kubenswrapper[5119]: I0121 10:49:31.182262 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db96b6cc-070c-4f62-ad0d-b72a93a8384b-catalog-content\") pod \"db96b6cc-070c-4f62-ad0d-b72a93a8384b\" (UID: \"db96b6cc-070c-4f62-ad0d-b72a93a8384b\") " Jan 21 10:49:31 crc kubenswrapper[5119]: I0121 10:49:31.182423 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8ztd\" (UniqueName: \"kubernetes.io/projected/db96b6cc-070c-4f62-ad0d-b72a93a8384b-kube-api-access-q8ztd\") pod \"db96b6cc-070c-4f62-ad0d-b72a93a8384b\" (UID: \"db96b6cc-070c-4f62-ad0d-b72a93a8384b\") " Jan 21 10:49:31 crc kubenswrapper[5119]: I0121 10:49:31.182449 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db96b6cc-070c-4f62-ad0d-b72a93a8384b-utilities\") pod \"db96b6cc-070c-4f62-ad0d-b72a93a8384b\" (UID: \"db96b6cc-070c-4f62-ad0d-b72a93a8384b\") " Jan 21 10:49:31 crc kubenswrapper[5119]: I0121 10:49:31.183949 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db96b6cc-070c-4f62-ad0d-b72a93a8384b-utilities" (OuterVolumeSpecName: "utilities") pod "db96b6cc-070c-4f62-ad0d-b72a93a8384b" (UID: "db96b6cc-070c-4f62-ad0d-b72a93a8384b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:49:31 crc kubenswrapper[5119]: I0121 10:49:31.191481 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db96b6cc-070c-4f62-ad0d-b72a93a8384b-kube-api-access-q8ztd" (OuterVolumeSpecName: "kube-api-access-q8ztd") pod "db96b6cc-070c-4f62-ad0d-b72a93a8384b" (UID: "db96b6cc-070c-4f62-ad0d-b72a93a8384b"). InnerVolumeSpecName "kube-api-access-q8ztd". 
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:49:31 crc kubenswrapper[5119]: I0121 10:49:31.204923 5119 generic.go:358] "Generic (PLEG): container finished" podID="db96b6cc-070c-4f62-ad0d-b72a93a8384b" containerID="2cf32246b29e85f8fb3a5d975d50e1b185e41bc50e716d32e4195693c3171da7" exitCode=0
Jan 21 10:49:31 crc kubenswrapper[5119]: I0121 10:49:31.205018 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xbmsx"
Jan 21 10:49:31 crc kubenswrapper[5119]: I0121 10:49:31.205021 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xbmsx" event={"ID":"db96b6cc-070c-4f62-ad0d-b72a93a8384b","Type":"ContainerDied","Data":"2cf32246b29e85f8fb3a5d975d50e1b185e41bc50e716d32e4195693c3171da7"}
Jan 21 10:49:31 crc kubenswrapper[5119]: I0121 10:49:31.205958 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xbmsx" event={"ID":"db96b6cc-070c-4f62-ad0d-b72a93a8384b","Type":"ContainerDied","Data":"74c3892334a59806395b8c1a2ee13ea0fc9ca0ff9f209f4159b10f886e362bc6"}
Jan 21 10:49:31 crc kubenswrapper[5119]: I0121 10:49:31.205983 5119 scope.go:117] "RemoveContainer" containerID="2cf32246b29e85f8fb3a5d975d50e1b185e41bc50e716d32e4195693c3171da7"
Jan 21 10:49:31 crc kubenswrapper[5119]: I0121 10:49:31.226423 5119 scope.go:117] "RemoveContainer" containerID="46819dbe81328a2e944f574612e6d9120a76f398ffead7cc352db5153af54f8a"
Jan 21 10:49:31 crc kubenswrapper[5119]: I0121 10:49:31.238241 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db96b6cc-070c-4f62-ad0d-b72a93a8384b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db96b6cc-070c-4f62-ad0d-b72a93a8384b" (UID: "db96b6cc-070c-4f62-ad0d-b72a93a8384b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:49:31 crc kubenswrapper[5119]: I0121 10:49:31.260973 5119 scope.go:117] "RemoveContainer" containerID="f35502eb3e4082409da87e8ccbcdca3c71ae1e35405ea55f691d541c3872cdef"
Jan 21 10:49:31 crc kubenswrapper[5119]: I0121 10:49:31.276681 5119 scope.go:117] "RemoveContainer" containerID="2cf32246b29e85f8fb3a5d975d50e1b185e41bc50e716d32e4195693c3171da7"
Jan 21 10:49:31 crc kubenswrapper[5119]: E0121 10:49:31.277518 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cf32246b29e85f8fb3a5d975d50e1b185e41bc50e716d32e4195693c3171da7\": container with ID starting with 2cf32246b29e85f8fb3a5d975d50e1b185e41bc50e716d32e4195693c3171da7 not found: ID does not exist" containerID="2cf32246b29e85f8fb3a5d975d50e1b185e41bc50e716d32e4195693c3171da7"
Jan 21 10:49:31 crc kubenswrapper[5119]: I0121 10:49:31.277582 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cf32246b29e85f8fb3a5d975d50e1b185e41bc50e716d32e4195693c3171da7"} err="failed to get container status \"2cf32246b29e85f8fb3a5d975d50e1b185e41bc50e716d32e4195693c3171da7\": rpc error: code = NotFound desc = could not find container \"2cf32246b29e85f8fb3a5d975d50e1b185e41bc50e716d32e4195693c3171da7\": container with ID starting with 2cf32246b29e85f8fb3a5d975d50e1b185e41bc50e716d32e4195693c3171da7 not found: ID does not exist"
Jan 21 10:49:31 crc kubenswrapper[5119]: I0121 10:49:31.277634 5119 scope.go:117] "RemoveContainer" containerID="46819dbe81328a2e944f574612e6d9120a76f398ffead7cc352db5153af54f8a"
Jan 21 10:49:31 crc kubenswrapper[5119]: E0121 10:49:31.278030 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46819dbe81328a2e944f574612e6d9120a76f398ffead7cc352db5153af54f8a\": container with ID starting with 46819dbe81328a2e944f574612e6d9120a76f398ffead7cc352db5153af54f8a not found: ID does not exist" containerID="46819dbe81328a2e944f574612e6d9120a76f398ffead7cc352db5153af54f8a"
Jan 21 10:49:31 crc kubenswrapper[5119]: I0121 10:49:31.278065 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46819dbe81328a2e944f574612e6d9120a76f398ffead7cc352db5153af54f8a"} err="failed to get container status \"46819dbe81328a2e944f574612e6d9120a76f398ffead7cc352db5153af54f8a\": rpc error: code = NotFound desc = could not find container \"46819dbe81328a2e944f574612e6d9120a76f398ffead7cc352db5153af54f8a\": container with ID starting with 46819dbe81328a2e944f574612e6d9120a76f398ffead7cc352db5153af54f8a not found: ID does not exist"
Jan 21 10:49:31 crc kubenswrapper[5119]: I0121 10:49:31.278085 5119 scope.go:117] "RemoveContainer" containerID="f35502eb3e4082409da87e8ccbcdca3c71ae1e35405ea55f691d541c3872cdef"
Jan 21 10:49:31 crc kubenswrapper[5119]: E0121 10:49:31.278265 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f35502eb3e4082409da87e8ccbcdca3c71ae1e35405ea55f691d541c3872cdef\": container with ID starting with f35502eb3e4082409da87e8ccbcdca3c71ae1e35405ea55f691d541c3872cdef not found: ID does not exist" containerID="f35502eb3e4082409da87e8ccbcdca3c71ae1e35405ea55f691d541c3872cdef"
Jan 21 10:49:31 crc kubenswrapper[5119]: I0121 10:49:31.278284 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f35502eb3e4082409da87e8ccbcdca3c71ae1e35405ea55f691d541c3872cdef"} err="failed to get container status \"f35502eb3e4082409da87e8ccbcdca3c71ae1e35405ea55f691d541c3872cdef\": rpc error: code = NotFound desc = could not find container \"f35502eb3e4082409da87e8ccbcdca3c71ae1e35405ea55f691d541c3872cdef\": container with ID starting with f35502eb3e4082409da87e8ccbcdca3c71ae1e35405ea55f691d541c3872cdef not found: ID does not exist"
Jan 21 10:49:31 crc kubenswrapper[5119]: I0121 10:49:31.285534 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db96b6cc-070c-4f62-ad0d-b72a93a8384b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 10:49:31 crc kubenswrapper[5119]: I0121 10:49:31.285654 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q8ztd\" (UniqueName: \"kubernetes.io/projected/db96b6cc-070c-4f62-ad0d-b72a93a8384b-kube-api-access-q8ztd\") on node \"crc\" DevicePath \"\""
Jan 21 10:49:31 crc kubenswrapper[5119]: I0121 10:49:31.285668 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db96b6cc-070c-4f62-ad0d-b72a93a8384b-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 10:49:31 crc kubenswrapper[5119]: I0121 10:49:31.539879 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xbmsx"]
Jan 21 10:49:31 crc kubenswrapper[5119]: I0121 10:49:31.547715 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xbmsx"]
Jan 21 10:49:32 crc kubenswrapper[5119]: I0121 10:49:32.615448 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db96b6cc-070c-4f62-ad0d-b72a93a8384b" path="/var/lib/kubelet/pods/db96b6cc-070c-4f62-ad0d-b72a93a8384b/volumes"
Jan 21 10:49:36 crc kubenswrapper[5119]: I0121 10:49:36.591340 5119 scope.go:117] "RemoveContainer" containerID="a521ed658fa71bfaf54a98ad7b4204bf0ba78d4dc0e831fed9b773d9403d0b7f"
Jan 21 10:49:36 crc kubenswrapper[5119]: E0121 10:49:36.591983 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5"
Jan 21 10:49:46 crc kubenswrapper[5119]: I0121 10:49:46.724113 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log"
Jan 21 10:49:46 crc kubenswrapper[5119]: I0121 10:49:46.725766 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log"
Jan 21 10:49:46 crc kubenswrapper[5119]: I0121 10:49:46.728898 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Jan 21 10:49:46 crc kubenswrapper[5119]: I0121 10:49:46.729618 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Jan 21 10:49:49 crc kubenswrapper[5119]: I0121 10:49:49.590441 5119 scope.go:117] "RemoveContainer" containerID="a521ed658fa71bfaf54a98ad7b4204bf0ba78d4dc0e831fed9b773d9403d0b7f"
Jan 21 10:49:49 crc kubenswrapper[5119]: E0121 10:49:49.590972 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5"
Jan 21 10:50:00 crc kubenswrapper[5119]: I0121 10:50:00.129885 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483210-bbltn"]
Jan 21 10:50:00 crc kubenswrapper[5119]: I0121 10:50:00.130990 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="db96b6cc-070c-4f62-ad0d-b72a93a8384b" containerName="extract-content"
Jan 21 10:50:00 crc kubenswrapper[5119]: I0121 10:50:00.131003 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="db96b6cc-070c-4f62-ad0d-b72a93a8384b" containerName="extract-content"
Jan 21 10:50:00 crc kubenswrapper[5119]: I0121 10:50:00.131013 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="db96b6cc-070c-4f62-ad0d-b72a93a8384b" containerName="registry-server"
Jan 21 10:50:00 crc kubenswrapper[5119]: I0121 10:50:00.131019 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="db96b6cc-070c-4f62-ad0d-b72a93a8384b" containerName="registry-server"
Jan 21 10:50:00 crc kubenswrapper[5119]: I0121 10:50:00.131033 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="db96b6cc-070c-4f62-ad0d-b72a93a8384b" containerName="extract-utilities"
Jan 21 10:50:00 crc kubenswrapper[5119]: I0121 10:50:00.131039 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="db96b6cc-070c-4f62-ad0d-b72a93a8384b" containerName="extract-utilities"
Jan 21 10:50:00 crc kubenswrapper[5119]: I0121 10:50:00.131152 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="db96b6cc-070c-4f62-ad0d-b72a93a8384b" containerName="registry-server"
Jan 21 10:50:00 crc kubenswrapper[5119]: I0121 10:50:00.141768 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483210-bbltn"
Jan 21 10:50:00 crc kubenswrapper[5119]: I0121 10:50:00.144167 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\""
Jan 21 10:50:00 crc kubenswrapper[5119]: I0121 10:50:00.145803 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 10:50:00 crc kubenswrapper[5119]: I0121 10:50:00.146015 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 10:50:00 crc kubenswrapper[5119]: I0121 10:50:00.146904 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483210-bbltn"]
Jan 21 10:50:00 crc kubenswrapper[5119]: I0121 10:50:00.305413 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pssmq\" (UniqueName: \"kubernetes.io/projected/7f8b3da1-c781-4d0d-9233-b27658d65749-kube-api-access-pssmq\") pod \"auto-csr-approver-29483210-bbltn\" (UID: \"7f8b3da1-c781-4d0d-9233-b27658d65749\") " pod="openshift-infra/auto-csr-approver-29483210-bbltn"
Jan 21 10:50:00 crc kubenswrapper[5119]: I0121 10:50:00.407016 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pssmq\" (UniqueName: \"kubernetes.io/projected/7f8b3da1-c781-4d0d-9233-b27658d65749-kube-api-access-pssmq\") pod \"auto-csr-approver-29483210-bbltn\" (UID: \"7f8b3da1-c781-4d0d-9233-b27658d65749\") " pod="openshift-infra/auto-csr-approver-29483210-bbltn"
Jan 21 10:50:00 crc kubenswrapper[5119]: I0121 10:50:00.430796 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pssmq\" (UniqueName: \"kubernetes.io/projected/7f8b3da1-c781-4d0d-9233-b27658d65749-kube-api-access-pssmq\") pod \"auto-csr-approver-29483210-bbltn\" (UID: \"7f8b3da1-c781-4d0d-9233-b27658d65749\") " pod="openshift-infra/auto-csr-approver-29483210-bbltn"
Jan 21 10:50:00 crc kubenswrapper[5119]: I0121 10:50:00.469218 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483210-bbltn"
Jan 21 10:50:00 crc kubenswrapper[5119]: I0121 10:50:00.660290 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483210-bbltn"]
Jan 21 10:50:01 crc kubenswrapper[5119]: I0121 10:50:01.458475 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483210-bbltn" event={"ID":"7f8b3da1-c781-4d0d-9233-b27658d65749","Type":"ContainerStarted","Data":"5a7c03e0aa999531544f9c741e7e92c46be3001d7b0817bf242e56ccb72956c4"}
Jan 21 10:50:02 crc kubenswrapper[5119]: I0121 10:50:02.467007 5119 generic.go:358] "Generic (PLEG): container finished" podID="7f8b3da1-c781-4d0d-9233-b27658d65749" containerID="a6bdbb55faaac5e11e930855f391b6906a9360e06d3ffb2abee9fefb8df736b2" exitCode=0
Jan 21 10:50:02 crc kubenswrapper[5119]: I0121 10:50:02.467634 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483210-bbltn" event={"ID":"7f8b3da1-c781-4d0d-9233-b27658d65749","Type":"ContainerDied","Data":"a6bdbb55faaac5e11e930855f391b6906a9360e06d3ffb2abee9fefb8df736b2"}
Jan 21 10:50:03 crc kubenswrapper[5119]: I0121 10:50:03.591998 5119 scope.go:117] "RemoveContainer" containerID="a521ed658fa71bfaf54a98ad7b4204bf0ba78d4dc0e831fed9b773d9403d0b7f"
Jan 21 10:50:03 crc kubenswrapper[5119]: E0121 10:50:03.592576 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5"
Jan 21 10:50:03 crc kubenswrapper[5119]: I0121 10:50:03.742558 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483210-bbltn"
Jan 21 10:50:03 crc kubenswrapper[5119]: I0121 10:50:03.857969 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pssmq\" (UniqueName: \"kubernetes.io/projected/7f8b3da1-c781-4d0d-9233-b27658d65749-kube-api-access-pssmq\") pod \"7f8b3da1-c781-4d0d-9233-b27658d65749\" (UID: \"7f8b3da1-c781-4d0d-9233-b27658d65749\") "
Jan 21 10:50:03 crc kubenswrapper[5119]: I0121 10:50:03.864724 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f8b3da1-c781-4d0d-9233-b27658d65749-kube-api-access-pssmq" (OuterVolumeSpecName: "kube-api-access-pssmq") pod "7f8b3da1-c781-4d0d-9233-b27658d65749" (UID: "7f8b3da1-c781-4d0d-9233-b27658d65749"). InnerVolumeSpecName "kube-api-access-pssmq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:50:03 crc kubenswrapper[5119]: I0121 10:50:03.959805 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pssmq\" (UniqueName: \"kubernetes.io/projected/7f8b3da1-c781-4d0d-9233-b27658d65749-kube-api-access-pssmq\") on node \"crc\" DevicePath \"\""
Jan 21 10:50:04 crc kubenswrapper[5119]: I0121 10:50:04.495282 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483210-bbltn" event={"ID":"7f8b3da1-c781-4d0d-9233-b27658d65749","Type":"ContainerDied","Data":"5a7c03e0aa999531544f9c741e7e92c46be3001d7b0817bf242e56ccb72956c4"}
Jan 21 10:50:04 crc kubenswrapper[5119]: I0121 10:50:04.495332 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a7c03e0aa999531544f9c741e7e92c46be3001d7b0817bf242e56ccb72956c4"
Jan 21 10:50:04 crc kubenswrapper[5119]: I0121 10:50:04.495391 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483210-bbltn"
Jan 21 10:50:04 crc kubenswrapper[5119]: I0121 10:50:04.813809 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483204-bpbjq"]
Jan 21 10:50:04 crc kubenswrapper[5119]: I0121 10:50:04.825250 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483204-bpbjq"]
Jan 21 10:50:06 crc kubenswrapper[5119]: I0121 10:50:06.598691 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b3e790a-dae4-4bb2-a739-89114727281c" path="/var/lib/kubelet/pods/0b3e790a-dae4-4bb2-a739-89114727281c/volumes"
Jan 21 10:50:12 crc kubenswrapper[5119]: I0121 10:50:12.886807 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lfpd4"]
Jan 21 10:50:12 crc kubenswrapper[5119]: I0121 10:50:12.888096 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7f8b3da1-c781-4d0d-9233-b27658d65749" containerName="oc"
Jan 21 10:50:12 crc kubenswrapper[5119]: I0121 10:50:12.888110 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f8b3da1-c781-4d0d-9233-b27658d65749" containerName="oc"
Jan 21 10:50:12 crc kubenswrapper[5119]: I0121 10:50:12.888264 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="7f8b3da1-c781-4d0d-9233-b27658d65749" containerName="oc"
Jan 21 10:50:12 crc kubenswrapper[5119]: I0121 10:50:12.905751 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lfpd4"]
Jan 21 10:50:12 crc kubenswrapper[5119]: I0121 10:50:12.905917 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lfpd4"
Jan 21 10:50:13 crc kubenswrapper[5119]: I0121 10:50:13.001529 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e578d8a2-5c12-4d79-85c0-700763083130-utilities\") pod \"redhat-operators-lfpd4\" (UID: \"e578d8a2-5c12-4d79-85c0-700763083130\") " pod="openshift-marketplace/redhat-operators-lfpd4"
Jan 21 10:50:13 crc kubenswrapper[5119]: I0121 10:50:13.001573 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5rcq\" (UniqueName: \"kubernetes.io/projected/e578d8a2-5c12-4d79-85c0-700763083130-kube-api-access-j5rcq\") pod \"redhat-operators-lfpd4\" (UID: \"e578d8a2-5c12-4d79-85c0-700763083130\") " pod="openshift-marketplace/redhat-operators-lfpd4"
Jan 21 10:50:13 crc kubenswrapper[5119]: I0121 10:50:13.001598 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e578d8a2-5c12-4d79-85c0-700763083130-catalog-content\") pod \"redhat-operators-lfpd4\" (UID: \"e578d8a2-5c12-4d79-85c0-700763083130\") " pod="openshift-marketplace/redhat-operators-lfpd4"
Jan 21 10:50:13 crc kubenswrapper[5119]: I0121 10:50:13.103210 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e578d8a2-5c12-4d79-85c0-700763083130-utilities\") pod \"redhat-operators-lfpd4\" (UID: \"e578d8a2-5c12-4d79-85c0-700763083130\") " pod="openshift-marketplace/redhat-operators-lfpd4"
Jan 21 10:50:13 crc kubenswrapper[5119]: I0121 10:50:13.103257 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j5rcq\" (UniqueName: \"kubernetes.io/projected/e578d8a2-5c12-4d79-85c0-700763083130-kube-api-access-j5rcq\") pod \"redhat-operators-lfpd4\" (UID: \"e578d8a2-5c12-4d79-85c0-700763083130\") " pod="openshift-marketplace/redhat-operators-lfpd4"
Jan 21 10:50:13 crc kubenswrapper[5119]: I0121 10:50:13.103289 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e578d8a2-5c12-4d79-85c0-700763083130-catalog-content\") pod \"redhat-operators-lfpd4\" (UID: \"e578d8a2-5c12-4d79-85c0-700763083130\") " pod="openshift-marketplace/redhat-operators-lfpd4"
Jan 21 10:50:13 crc kubenswrapper[5119]: I0121 10:50:13.103698 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e578d8a2-5c12-4d79-85c0-700763083130-utilities\") pod \"redhat-operators-lfpd4\" (UID: \"e578d8a2-5c12-4d79-85c0-700763083130\") " pod="openshift-marketplace/redhat-operators-lfpd4"
Jan 21 10:50:13 crc kubenswrapper[5119]: I0121 10:50:13.103762 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e578d8a2-5c12-4d79-85c0-700763083130-catalog-content\") pod \"redhat-operators-lfpd4\" (UID: \"e578d8a2-5c12-4d79-85c0-700763083130\") " pod="openshift-marketplace/redhat-operators-lfpd4"
Jan 21 10:50:13 crc kubenswrapper[5119]: I0121 10:50:13.123572 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5rcq\" (UniqueName: \"kubernetes.io/projected/e578d8a2-5c12-4d79-85c0-700763083130-kube-api-access-j5rcq\") pod \"redhat-operators-lfpd4\" (UID: \"e578d8a2-5c12-4d79-85c0-700763083130\") " pod="openshift-marketplace/redhat-operators-lfpd4"
Jan 21 10:50:13 crc kubenswrapper[5119]: I0121 10:50:13.226825 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lfpd4"
Jan 21 10:50:13 crc kubenswrapper[5119]: W0121 10:50:13.550882 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode578d8a2_5c12_4d79_85c0_700763083130.slice/crio-df57b615e256643638e441c10b8cc6177b3a254ec6ea3941bf428a01fba76f96 WatchSource:0}: Error finding container df57b615e256643638e441c10b8cc6177b3a254ec6ea3941bf428a01fba76f96: Status 404 returned error can't find the container with id df57b615e256643638e441c10b8cc6177b3a254ec6ea3941bf428a01fba76f96
Jan 21 10:50:13 crc kubenswrapper[5119]: I0121 10:50:13.553726 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lfpd4"]
Jan 21 10:50:14 crc kubenswrapper[5119]: I0121 10:50:14.567309 5119 generic.go:358] "Generic (PLEG): container finished" podID="e578d8a2-5c12-4d79-85c0-700763083130" containerID="c83c0485bf356c335e092331d135c6807c733bc04005396b400cb103a5a27c19" exitCode=0
Jan 21 10:50:14 crc kubenswrapper[5119]: I0121 10:50:14.567365 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfpd4" event={"ID":"e578d8a2-5c12-4d79-85c0-700763083130","Type":"ContainerDied","Data":"c83c0485bf356c335e092331d135c6807c733bc04005396b400cb103a5a27c19"}
Jan 21 10:50:14 crc kubenswrapper[5119]: I0121 10:50:14.567763 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfpd4" event={"ID":"e578d8a2-5c12-4d79-85c0-700763083130","Type":"ContainerStarted","Data":"df57b615e256643638e441c10b8cc6177b3a254ec6ea3941bf428a01fba76f96"}
Jan 21 10:50:14 crc kubenswrapper[5119]: I0121 10:50:14.597968 5119 scope.go:117] "RemoveContainer" containerID="a521ed658fa71bfaf54a98ad7b4204bf0ba78d4dc0e831fed9b773d9403d0b7f"
Jan 21 10:50:14 crc kubenswrapper[5119]: E0121 10:50:14.598261 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5"
Jan 21 10:50:15 crc kubenswrapper[5119]: I0121 10:50:15.589709 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfpd4" event={"ID":"e578d8a2-5c12-4d79-85c0-700763083130","Type":"ContainerStarted","Data":"d172ff30020a71c96572026022906186230423b2bb626fa036b6b0698d2c689e"}
Jan 21 10:50:16 crc kubenswrapper[5119]: I0121 10:50:16.618139 5119 generic.go:358] "Generic (PLEG): container finished" podID="e578d8a2-5c12-4d79-85c0-700763083130" containerID="d172ff30020a71c96572026022906186230423b2bb626fa036b6b0698d2c689e" exitCode=0
Jan 21 10:50:16 crc kubenswrapper[5119]: I0121 10:50:16.618194 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfpd4" event={"ID":"e578d8a2-5c12-4d79-85c0-700763083130","Type":"ContainerDied","Data":"d172ff30020a71c96572026022906186230423b2bb626fa036b6b0698d2c689e"}
Jan 21 10:50:17 crc kubenswrapper[5119]: I0121 10:50:17.627359 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfpd4" event={"ID":"e578d8a2-5c12-4d79-85c0-700763083130","Type":"ContainerStarted","Data":"c71f94f7d49a4ae66cfe5edbf36058341b926bdce9856a5945e867143b2edc30"}
Jan 21 10:50:23 crc kubenswrapper[5119]: I0121 10:50:23.227755 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-lfpd4"
Jan 21 10:50:23 crc kubenswrapper[5119]: I0121 10:50:23.228844 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lfpd4"
Jan 21 10:50:23 crc kubenswrapper[5119]: I0121 10:50:23.267851 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lfpd4"
Jan 21 10:50:23 crc kubenswrapper[5119]: I0121 10:50:23.286512 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lfpd4" podStartSLOduration=10.567466142 podStartE2EDuration="11.286492166s" podCreationTimestamp="2026-01-21 10:50:12 +0000 UTC" firstStartedPulling="2026-01-21 10:50:14.568392504 +0000 UTC m=+3330.236484172" lastFinishedPulling="2026-01-21 10:50:15.287418518 +0000 UTC m=+3330.955510196" observedRunningTime="2026-01-21 10:50:17.644426243 +0000 UTC m=+3333.312517921" watchObservedRunningTime="2026-01-21 10:50:23.286492166 +0000 UTC m=+3338.954583854"
Jan 21 10:50:23 crc kubenswrapper[5119]: I0121 10:50:23.705733 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lfpd4"
Jan 21 10:50:23 crc kubenswrapper[5119]: I0121 10:50:23.747333 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lfpd4"]
Jan 21 10:50:25 crc kubenswrapper[5119]: I0121 10:50:25.679073 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lfpd4" podUID="e578d8a2-5c12-4d79-85c0-700763083130" containerName="registry-server" containerID="cri-o://c71f94f7d49a4ae66cfe5edbf36058341b926bdce9856a5945e867143b2edc30" gracePeriod=2
Jan 21 10:50:26 crc kubenswrapper[5119]: I0121 10:50:26.688105 5119 generic.go:358] "Generic (PLEG): container finished" podID="e578d8a2-5c12-4d79-85c0-700763083130" containerID="c71f94f7d49a4ae66cfe5edbf36058341b926bdce9856a5945e867143b2edc30" exitCode=0
Jan 21 10:50:26 crc kubenswrapper[5119]: I0121 10:50:26.688167 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfpd4" event={"ID":"e578d8a2-5c12-4d79-85c0-700763083130","Type":"ContainerDied","Data":"c71f94f7d49a4ae66cfe5edbf36058341b926bdce9856a5945e867143b2edc30"}
Jan 21 10:50:27 crc kubenswrapper[5119]: I0121 10:50:27.693346 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lfpd4"
Jan 21 10:50:27 crc kubenswrapper[5119]: I0121 10:50:27.700085 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfpd4" event={"ID":"e578d8a2-5c12-4d79-85c0-700763083130","Type":"ContainerDied","Data":"df57b615e256643638e441c10b8cc6177b3a254ec6ea3941bf428a01fba76f96"}
Jan 21 10:50:27 crc kubenswrapper[5119]: I0121 10:50:27.700139 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lfpd4"
Jan 21 10:50:27 crc kubenswrapper[5119]: I0121 10:50:27.700149 5119 scope.go:117] "RemoveContainer" containerID="c71f94f7d49a4ae66cfe5edbf36058341b926bdce9856a5945e867143b2edc30"
Jan 21 10:50:27 crc kubenswrapper[5119]: I0121 10:50:27.739151 5119 scope.go:117] "RemoveContainer" containerID="d172ff30020a71c96572026022906186230423b2bb626fa036b6b0698d2c689e"
Jan 21 10:50:27 crc kubenswrapper[5119]: I0121 10:50:27.764263 5119 scope.go:117] "RemoveContainer" containerID="c83c0485bf356c335e092331d135c6807c733bc04005396b400cb103a5a27c19"
Jan 21 10:50:27 crc kubenswrapper[5119]: I0121 10:50:27.845259 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e578d8a2-5c12-4d79-85c0-700763083130-utilities\") pod \"e578d8a2-5c12-4d79-85c0-700763083130\" (UID: \"e578d8a2-5c12-4d79-85c0-700763083130\") "
Jan 21 10:50:27 crc kubenswrapper[5119]: I0121 10:50:27.846361 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5rcq\" (UniqueName: \"kubernetes.io/projected/e578d8a2-5c12-4d79-85c0-700763083130-kube-api-access-j5rcq\") pod \"e578d8a2-5c12-4d79-85c0-700763083130\" (UID: \"e578d8a2-5c12-4d79-85c0-700763083130\") "
Jan 21 10:50:27 crc kubenswrapper[5119]: I0121 10:50:27.846700 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e578d8a2-5c12-4d79-85c0-700763083130-catalog-content\") pod \"e578d8a2-5c12-4d79-85c0-700763083130\" (UID: \"e578d8a2-5c12-4d79-85c0-700763083130\") "
Jan 21 10:50:27 crc kubenswrapper[5119]: I0121 10:50:27.847042 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e578d8a2-5c12-4d79-85c0-700763083130-utilities" (OuterVolumeSpecName: "utilities") pod "e578d8a2-5c12-4d79-85c0-700763083130" (UID: "e578d8a2-5c12-4d79-85c0-700763083130"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:50:27 crc kubenswrapper[5119]: I0121 10:50:27.847418 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e578d8a2-5c12-4d79-85c0-700763083130-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 10:50:27 crc kubenswrapper[5119]: I0121 10:50:27.851310 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e578d8a2-5c12-4d79-85c0-700763083130-kube-api-access-j5rcq" (OuterVolumeSpecName: "kube-api-access-j5rcq") pod "e578d8a2-5c12-4d79-85c0-700763083130" (UID: "e578d8a2-5c12-4d79-85c0-700763083130"). InnerVolumeSpecName "kube-api-access-j5rcq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:50:27 crc kubenswrapper[5119]: I0121 10:50:27.949812 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j5rcq\" (UniqueName: \"kubernetes.io/projected/e578d8a2-5c12-4d79-85c0-700763083130-kube-api-access-j5rcq\") on node \"crc\" DevicePath \"\""
Jan 21 10:50:27 crc kubenswrapper[5119]: I0121 10:50:27.959949 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e578d8a2-5c12-4d79-85c0-700763083130-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e578d8a2-5c12-4d79-85c0-700763083130" (UID: "e578d8a2-5c12-4d79-85c0-700763083130"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:50:28 crc kubenswrapper[5119]: I0121 10:50:28.032313 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lfpd4"]
Jan 21 10:50:28 crc kubenswrapper[5119]: I0121 10:50:28.039259 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lfpd4"]
Jan 21 10:50:28 crc kubenswrapper[5119]: I0121 10:50:28.051247 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e578d8a2-5c12-4d79-85c0-700763083130-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 10:50:28 crc kubenswrapper[5119]: I0121 10:50:28.592294 5119 scope.go:117] "RemoveContainer" containerID="a521ed658fa71bfaf54a98ad7b4204bf0ba78d4dc0e831fed9b773d9403d0b7f"
Jan 21 10:50:28 crc kubenswrapper[5119]: E0121 10:50:28.593346 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5"
Jan 21 10:50:28 crc kubenswrapper[5119]: I0121 10:50:28.601277 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e578d8a2-5c12-4d79-85c0-700763083130" path="/var/lib/kubelet/pods/e578d8a2-5c12-4d79-85c0-700763083130/volumes"
Jan 21 10:50:42 crc kubenswrapper[5119]: I0121 10:50:42.591458 5119 scope.go:117] "RemoveContainer" containerID="a521ed658fa71bfaf54a98ad7b4204bf0ba78d4dc0e831fed9b773d9403d0b7f"
Jan 21 10:50:42 crc kubenswrapper[5119]: E0121 10:50:42.592283 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5"
Jan 21 10:50:53 crc kubenswrapper[5119]: I0121 10:50:53.503199 5119 scope.go:117] "RemoveContainer" containerID="3a32c29aa565ba10f984197905fb045fa914498b3398f5cae3798b21f3815abf"
Jan 21 10:50:56 crc kubenswrapper[5119]: I0121 10:50:56.597327 5119 scope.go:117] "RemoveContainer" containerID="a521ed658fa71bfaf54a98ad7b4204bf0ba78d4dc0e831fed9b773d9403d0b7f"
Jan 21 10:50:56 crc kubenswrapper[5119]: E0121 10:50:56.598478 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5"
Jan 21 10:51:11 crc kubenswrapper[5119]: I0121 10:51:11.590895 5119 scope.go:117] "RemoveContainer" containerID="a521ed658fa71bfaf54a98ad7b4204bf0ba78d4dc0e831fed9b773d9403d0b7f"
Jan 21 10:51:11 crc kubenswrapper[5119]: E0121 10:51:11.591691 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5"
Jan 21 10:51:25 crc kubenswrapper[5119]: I0121 10:51:25.590817 5119 scope.go:117] "RemoveContainer" containerID="a521ed658fa71bfaf54a98ad7b4204bf0ba78d4dc0e831fed9b773d9403d0b7f"
Jan 21 10:51:25 crc kubenswrapper[5119]: E0121 10:51:25.591660 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5"
Jan 21 10:51:31 crc kubenswrapper[5119]: I0121 10:51:31.983946 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-9c9pc"]
Jan 21 10:51:31 crc kubenswrapper[5119]: I0121 10:51:31.984863 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e578d8a2-5c12-4d79-85c0-700763083130" containerName="extract-content"
Jan 21 10:51:31 crc kubenswrapper[5119]: I0121 10:51:31.984876 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="e578d8a2-5c12-4d79-85c0-700763083130" containerName="extract-content"
Jan 21 10:51:31 crc kubenswrapper[5119]: I0121 10:51:31.984885 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e578d8a2-5c12-4d79-85c0-700763083130" containerName="registry-server"
Jan 21 10:51:31 crc kubenswrapper[5119]: I0121 10:51:31.984894 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="e578d8a2-5c12-4d79-85c0-700763083130" containerName="registry-server"
Jan 21 10:51:31 crc kubenswrapper[5119]: I0121 10:51:31.984921 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e578d8a2-5c12-4d79-85c0-700763083130" containerName="extract-utilities"
Jan 21 10:51:31 crc kubenswrapper[5119]: I0121 10:51:31.984926 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="e578d8a2-5c12-4d79-85c0-700763083130" containerName="extract-utilities"
Jan 21 10:51:31 crc kubenswrapper[5119]: I0121 10:51:31.985054 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="e578d8a2-5c12-4d79-85c0-700763083130" containerName="registry-server"
Jan 21 10:51:31 crc kubenswrapper[5119]: I0121 10:51:31.998585 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-9c9pc"]
Jan 21 10:51:31 crc kubenswrapper[5119]: I0121 10:51:31.998720 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-9c9pc"
Jan 21 10:51:32 crc kubenswrapper[5119]: I0121 10:51:32.085958 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l2d2\" (UniqueName: \"kubernetes.io/projected/7e1dd0d4-d3a6-4073-b069-7908cc37710d-kube-api-access-4l2d2\") pod \"infrawatch-operators-9c9pc\" (UID: \"7e1dd0d4-d3a6-4073-b069-7908cc37710d\") " pod="service-telemetry/infrawatch-operators-9c9pc"
Jan 21 10:51:32 crc kubenswrapper[5119]: I0121 10:51:32.187795 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4l2d2\" (UniqueName: \"kubernetes.io/projected/7e1dd0d4-d3a6-4073-b069-7908cc37710d-kube-api-access-4l2d2\") pod \"infrawatch-operators-9c9pc\" (UID: \"7e1dd0d4-d3a6-4073-b069-7908cc37710d\") " pod="service-telemetry/infrawatch-operators-9c9pc"
Jan 21 10:51:32 crc kubenswrapper[5119]: I0121 10:51:32.209190 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l2d2\" (UniqueName: \"kubernetes.io/projected/7e1dd0d4-d3a6-4073-b069-7908cc37710d-kube-api-access-4l2d2\") pod \"infrawatch-operators-9c9pc\" (UID: \"7e1dd0d4-d3a6-4073-b069-7908cc37710d\") " pod="service-telemetry/infrawatch-operators-9c9pc"
Jan 21 10:51:32 crc kubenswrapper[5119]: I0121 10:51:32.323497 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-9c9pc" Jan 21 10:51:32 crc kubenswrapper[5119]: I0121 10:51:32.510948 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-9c9pc"] Jan 21 10:51:32 crc kubenswrapper[5119]: I0121 10:51:32.517799 5119 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 10:51:33 crc kubenswrapper[5119]: I0121 10:51:33.189098 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-9c9pc" event={"ID":"7e1dd0d4-d3a6-4073-b069-7908cc37710d","Type":"ContainerStarted","Data":"1c350cab5db4e2cb2c8c1cac6d530f27c5cccc4c814bdbb74526e6b822ef9082"} Jan 21 10:51:33 crc kubenswrapper[5119]: I0121 10:51:33.189386 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-9c9pc" event={"ID":"7e1dd0d4-d3a6-4073-b069-7908cc37710d","Type":"ContainerStarted","Data":"10b58ec6c526f80453350529ad79e7350119982dceabae2d3599c236503c4506"} Jan 21 10:51:33 crc kubenswrapper[5119]: I0121 10:51:33.206941 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-9c9pc" podStartSLOduration=2.095441313 podStartE2EDuration="2.206922048s" podCreationTimestamp="2026-01-21 10:51:31 +0000 UTC" firstStartedPulling="2026-01-21 10:51:32.517982073 +0000 UTC m=+3408.186073751" lastFinishedPulling="2026-01-21 10:51:32.629462808 +0000 UTC m=+3408.297554486" observedRunningTime="2026-01-21 10:51:33.201045908 +0000 UTC m=+3408.869137606" watchObservedRunningTime="2026-01-21 10:51:33.206922048 +0000 UTC m=+3408.875013726" Jan 21 10:51:39 crc kubenswrapper[5119]: I0121 10:51:39.592332 5119 scope.go:117] "RemoveContainer" containerID="a521ed658fa71bfaf54a98ad7b4204bf0ba78d4dc0e831fed9b773d9403d0b7f" Jan 21 10:51:39 crc kubenswrapper[5119]: E0121 10:51:39.592840 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:51:42 crc kubenswrapper[5119]: I0121 10:51:42.324221 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-9c9pc" Jan 21 10:51:42 crc kubenswrapper[5119]: I0121 10:51:42.324583 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-9c9pc" Jan 21 10:51:42 crc kubenswrapper[5119]: I0121 10:51:42.353025 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-9c9pc" Jan 21 10:51:43 crc kubenswrapper[5119]: I0121 10:51:43.287685 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-9c9pc" Jan 21 10:51:43 crc kubenswrapper[5119]: I0121 10:51:43.323867 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-9c9pc"] Jan 21 10:51:45 crc kubenswrapper[5119]: I0121 10:51:45.273557 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-9c9pc" podUID="7e1dd0d4-d3a6-4073-b069-7908cc37710d" containerName="registry-server" containerID="cri-o://1c350cab5db4e2cb2c8c1cac6d530f27c5cccc4c814bdbb74526e6b822ef9082" gracePeriod=2 Jan 21 10:51:46 crc kubenswrapper[5119]: I0121 10:51:46.139388 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-9c9pc" Jan 21 10:51:46 crc kubenswrapper[5119]: I0121 10:51:46.198303 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l2d2\" (UniqueName: \"kubernetes.io/projected/7e1dd0d4-d3a6-4073-b069-7908cc37710d-kube-api-access-4l2d2\") pod \"7e1dd0d4-d3a6-4073-b069-7908cc37710d\" (UID: \"7e1dd0d4-d3a6-4073-b069-7908cc37710d\") " Jan 21 10:51:46 crc kubenswrapper[5119]: I0121 10:51:46.205209 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e1dd0d4-d3a6-4073-b069-7908cc37710d-kube-api-access-4l2d2" (OuterVolumeSpecName: "kube-api-access-4l2d2") pod "7e1dd0d4-d3a6-4073-b069-7908cc37710d" (UID: "7e1dd0d4-d3a6-4073-b069-7908cc37710d"). InnerVolumeSpecName "kube-api-access-4l2d2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:51:46 crc kubenswrapper[5119]: I0121 10:51:46.288814 5119 generic.go:358] "Generic (PLEG): container finished" podID="7e1dd0d4-d3a6-4073-b069-7908cc37710d" containerID="1c350cab5db4e2cb2c8c1cac6d530f27c5cccc4c814bdbb74526e6b822ef9082" exitCode=0 Jan 21 10:51:46 crc kubenswrapper[5119]: I0121 10:51:46.288861 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-9c9pc" event={"ID":"7e1dd0d4-d3a6-4073-b069-7908cc37710d","Type":"ContainerDied","Data":"1c350cab5db4e2cb2c8c1cac6d530f27c5cccc4c814bdbb74526e6b822ef9082"} Jan 21 10:51:46 crc kubenswrapper[5119]: I0121 10:51:46.288920 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-9c9pc" event={"ID":"7e1dd0d4-d3a6-4073-b069-7908cc37710d","Type":"ContainerDied","Data":"10b58ec6c526f80453350529ad79e7350119982dceabae2d3599c236503c4506"} Jan 21 10:51:46 crc kubenswrapper[5119]: I0121 10:51:46.288924 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-9c9pc" Jan 21 10:51:46 crc kubenswrapper[5119]: I0121 10:51:46.288970 5119 scope.go:117] "RemoveContainer" containerID="1c350cab5db4e2cb2c8c1cac6d530f27c5cccc4c814bdbb74526e6b822ef9082" Jan 21 10:51:46 crc kubenswrapper[5119]: I0121 10:51:46.303063 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4l2d2\" (UniqueName: \"kubernetes.io/projected/7e1dd0d4-d3a6-4073-b069-7908cc37710d-kube-api-access-4l2d2\") on node \"crc\" DevicePath \"\"" Jan 21 10:51:46 crc kubenswrapper[5119]: I0121 10:51:46.312393 5119 scope.go:117] "RemoveContainer" containerID="1c350cab5db4e2cb2c8c1cac6d530f27c5cccc4c814bdbb74526e6b822ef9082" Jan 21 10:51:46 crc kubenswrapper[5119]: E0121 10:51:46.312959 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c350cab5db4e2cb2c8c1cac6d530f27c5cccc4c814bdbb74526e6b822ef9082\": container with ID starting with 1c350cab5db4e2cb2c8c1cac6d530f27c5cccc4c814bdbb74526e6b822ef9082 not found: ID does not exist" containerID="1c350cab5db4e2cb2c8c1cac6d530f27c5cccc4c814bdbb74526e6b822ef9082" Jan 21 10:51:46 crc kubenswrapper[5119]: I0121 10:51:46.313002 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c350cab5db4e2cb2c8c1cac6d530f27c5cccc4c814bdbb74526e6b822ef9082"} err="failed to get container status \"1c350cab5db4e2cb2c8c1cac6d530f27c5cccc4c814bdbb74526e6b822ef9082\": rpc error: code = NotFound desc = could not find container \"1c350cab5db4e2cb2c8c1cac6d530f27c5cccc4c814bdbb74526e6b822ef9082\": container with ID starting with 1c350cab5db4e2cb2c8c1cac6d530f27c5cccc4c814bdbb74526e6b822ef9082 not found: ID does not exist" Jan 21 10:51:46 crc kubenswrapper[5119]: I0121 10:51:46.331765 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-9c9pc"] Jan 21 10:51:46 crc kubenswrapper[5119]: I0121 
10:51:46.338425 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-9c9pc"] Jan 21 10:51:46 crc kubenswrapper[5119]: I0121 10:51:46.599061 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e1dd0d4-d3a6-4073-b069-7908cc37710d" path="/var/lib/kubelet/pods/7e1dd0d4-d3a6-4073-b069-7908cc37710d/volumes" Jan 21 10:51:51 crc kubenswrapper[5119]: I0121 10:51:51.590403 5119 scope.go:117] "RemoveContainer" containerID="a521ed658fa71bfaf54a98ad7b4204bf0ba78d4dc0e831fed9b773d9403d0b7f" Jan 21 10:51:51 crc kubenswrapper[5119]: E0121 10:51:51.591162 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:52:00 crc kubenswrapper[5119]: I0121 10:52:00.131912 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483212-5fjkn"] Jan 21 10:52:00 crc kubenswrapper[5119]: I0121 10:52:00.135543 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7e1dd0d4-d3a6-4073-b069-7908cc37710d" containerName="registry-server" Jan 21 10:52:00 crc kubenswrapper[5119]: I0121 10:52:00.135583 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e1dd0d4-d3a6-4073-b069-7908cc37710d" containerName="registry-server" Jan 21 10:52:00 crc kubenswrapper[5119]: I0121 10:52:00.135861 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="7e1dd0d4-d3a6-4073-b069-7908cc37710d" containerName="registry-server" Jan 21 10:52:00 crc kubenswrapper[5119]: I0121 10:52:00.163472 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-infra/auto-csr-approver-29483212-5fjkn"] Jan 21 10:52:00 crc kubenswrapper[5119]: I0121 10:52:00.163635 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483212-5fjkn" Jan 21 10:52:00 crc kubenswrapper[5119]: I0121 10:52:00.168112 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\"" Jan 21 10:52:00 crc kubenswrapper[5119]: I0121 10:52:00.168324 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:52:00 crc kubenswrapper[5119]: I0121 10:52:00.168479 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:52:00 crc kubenswrapper[5119]: I0121 10:52:00.305831 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f8kz\" (UniqueName: \"kubernetes.io/projected/c906c18e-cb01-4a29-b300-542b4fe6140b-kube-api-access-8f8kz\") pod \"auto-csr-approver-29483212-5fjkn\" (UID: \"c906c18e-cb01-4a29-b300-542b4fe6140b\") " pod="openshift-infra/auto-csr-approver-29483212-5fjkn" Jan 21 10:52:00 crc kubenswrapper[5119]: I0121 10:52:00.407234 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8f8kz\" (UniqueName: \"kubernetes.io/projected/c906c18e-cb01-4a29-b300-542b4fe6140b-kube-api-access-8f8kz\") pod \"auto-csr-approver-29483212-5fjkn\" (UID: \"c906c18e-cb01-4a29-b300-542b4fe6140b\") " pod="openshift-infra/auto-csr-approver-29483212-5fjkn" Jan 21 10:52:00 crc kubenswrapper[5119]: I0121 10:52:00.425709 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f8kz\" (UniqueName: \"kubernetes.io/projected/c906c18e-cb01-4a29-b300-542b4fe6140b-kube-api-access-8f8kz\") pod \"auto-csr-approver-29483212-5fjkn\" (UID: 
\"c906c18e-cb01-4a29-b300-542b4fe6140b\") " pod="openshift-infra/auto-csr-approver-29483212-5fjkn" Jan 21 10:52:00 crc kubenswrapper[5119]: I0121 10:52:00.483771 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483212-5fjkn" Jan 21 10:52:00 crc kubenswrapper[5119]: I0121 10:52:00.712460 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483212-5fjkn"] Jan 21 10:52:01 crc kubenswrapper[5119]: I0121 10:52:01.403001 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483212-5fjkn" event={"ID":"c906c18e-cb01-4a29-b300-542b4fe6140b","Type":"ContainerStarted","Data":"efd9f5748c34544e316426f5ea5f49f4cc9201b8425381d2974dc8b383449f6e"} Jan 21 10:52:02 crc kubenswrapper[5119]: I0121 10:52:02.412926 5119 generic.go:358] "Generic (PLEG): container finished" podID="c906c18e-cb01-4a29-b300-542b4fe6140b" containerID="f33f5ad8aa4c1ef657dac19b1dbbd675efa15e6dba2070f1b3ecfd10277ce9b4" exitCode=0 Jan 21 10:52:02 crc kubenswrapper[5119]: I0121 10:52:02.412984 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483212-5fjkn" event={"ID":"c906c18e-cb01-4a29-b300-542b4fe6140b","Type":"ContainerDied","Data":"f33f5ad8aa4c1ef657dac19b1dbbd675efa15e6dba2070f1b3ecfd10277ce9b4"} Jan 21 10:52:02 crc kubenswrapper[5119]: I0121 10:52:02.592126 5119 scope.go:117] "RemoveContainer" containerID="a521ed658fa71bfaf54a98ad7b4204bf0ba78d4dc0e831fed9b773d9403d0b7f" Jan 21 10:52:02 crc kubenswrapper[5119]: E0121 10:52:02.592703 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" 
podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:52:03 crc kubenswrapper[5119]: I0121 10:52:03.646114 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483212-5fjkn" Jan 21 10:52:03 crc kubenswrapper[5119]: I0121 10:52:03.770700 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f8kz\" (UniqueName: \"kubernetes.io/projected/c906c18e-cb01-4a29-b300-542b4fe6140b-kube-api-access-8f8kz\") pod \"c906c18e-cb01-4a29-b300-542b4fe6140b\" (UID: \"c906c18e-cb01-4a29-b300-542b4fe6140b\") " Jan 21 10:52:03 crc kubenswrapper[5119]: I0121 10:52:03.775456 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c906c18e-cb01-4a29-b300-542b4fe6140b-kube-api-access-8f8kz" (OuterVolumeSpecName: "kube-api-access-8f8kz") pod "c906c18e-cb01-4a29-b300-542b4fe6140b" (UID: "c906c18e-cb01-4a29-b300-542b4fe6140b"). InnerVolumeSpecName "kube-api-access-8f8kz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:52:03 crc kubenswrapper[5119]: I0121 10:52:03.872761 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8f8kz\" (UniqueName: \"kubernetes.io/projected/c906c18e-cb01-4a29-b300-542b4fe6140b-kube-api-access-8f8kz\") on node \"crc\" DevicePath \"\"" Jan 21 10:52:04 crc kubenswrapper[5119]: I0121 10:52:04.427765 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483212-5fjkn" Jan 21 10:52:04 crc kubenswrapper[5119]: I0121 10:52:04.427819 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483212-5fjkn" event={"ID":"c906c18e-cb01-4a29-b300-542b4fe6140b","Type":"ContainerDied","Data":"efd9f5748c34544e316426f5ea5f49f4cc9201b8425381d2974dc8b383449f6e"} Jan 21 10:52:04 crc kubenswrapper[5119]: I0121 10:52:04.428203 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efd9f5748c34544e316426f5ea5f49f4cc9201b8425381d2974dc8b383449f6e" Jan 21 10:52:04 crc kubenswrapper[5119]: I0121 10:52:04.707077 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483206-ztzsq"] Jan 21 10:52:04 crc kubenswrapper[5119]: I0121 10:52:04.712139 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483206-ztzsq"] Jan 21 10:52:06 crc kubenswrapper[5119]: I0121 10:52:06.600945 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b0dc651-91c8-43c9-99f4-b1ddc64cbc9b" path="/var/lib/kubelet/pods/5b0dc651-91c8-43c9-99f4-b1ddc64cbc9b/volumes" Jan 21 10:52:14 crc kubenswrapper[5119]: I0121 10:52:14.597435 5119 scope.go:117] "RemoveContainer" containerID="a521ed658fa71bfaf54a98ad7b4204bf0ba78d4dc0e831fed9b773d9403d0b7f" Jan 21 10:52:14 crc kubenswrapper[5119]: E0121 10:52:14.598195 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:52:26 crc kubenswrapper[5119]: I0121 10:52:26.591289 5119 scope.go:117] "RemoveContainer" 
containerID="a521ed658fa71bfaf54a98ad7b4204bf0ba78d4dc0e831fed9b773d9403d0b7f" Jan 21 10:52:26 crc kubenswrapper[5119]: E0121 10:52:26.592182 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:52:39 crc kubenswrapper[5119]: I0121 10:52:39.590742 5119 scope.go:117] "RemoveContainer" containerID="a521ed658fa71bfaf54a98ad7b4204bf0ba78d4dc0e831fed9b773d9403d0b7f" Jan 21 10:52:39 crc kubenswrapper[5119]: E0121 10:52:39.591689 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:52:52 crc kubenswrapper[5119]: I0121 10:52:52.590735 5119 scope.go:117] "RemoveContainer" containerID="a521ed658fa71bfaf54a98ad7b4204bf0ba78d4dc0e831fed9b773d9403d0b7f" Jan 21 10:52:52 crc kubenswrapper[5119]: E0121 10:52:52.591365 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:52:53 crc kubenswrapper[5119]: I0121 10:52:53.662253 5119 scope.go:117] 
"RemoveContainer" containerID="541f6eb23808a3d0ebeb18c0e30e89a0e6ddb593bb15918d2e4c61a16cba45ef" Jan 21 10:53:05 crc kubenswrapper[5119]: I0121 10:53:05.591102 5119 scope.go:117] "RemoveContainer" containerID="a521ed658fa71bfaf54a98ad7b4204bf0ba78d4dc0e831fed9b773d9403d0b7f" Jan 21 10:53:05 crc kubenswrapper[5119]: E0121 10:53:05.591809 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 10:53:20 crc kubenswrapper[5119]: I0121 10:53:20.595724 5119 scope.go:117] "RemoveContainer" containerID="a521ed658fa71bfaf54a98ad7b4204bf0ba78d4dc0e831fed9b773d9403d0b7f" Jan 21 10:53:21 crc kubenswrapper[5119]: I0121 10:53:21.334638 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerStarted","Data":"6c6c6443973f84ae68f595b28489b4574cb199d66372a1d02488d64455df6eb3"} Jan 21 10:54:00 crc kubenswrapper[5119]: I0121 10:54:00.136081 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483214-p6qcs"] Jan 21 10:54:00 crc kubenswrapper[5119]: I0121 10:54:00.137446 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c906c18e-cb01-4a29-b300-542b4fe6140b" containerName="oc" Jan 21 10:54:00 crc kubenswrapper[5119]: I0121 10:54:00.137461 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="c906c18e-cb01-4a29-b300-542b4fe6140b" containerName="oc" Jan 21 10:54:00 crc kubenswrapper[5119]: I0121 10:54:00.137596 5119 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="c906c18e-cb01-4a29-b300-542b4fe6140b" containerName="oc" Jan 21 10:54:00 crc kubenswrapper[5119]: I0121 10:54:00.147243 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483214-p6qcs"] Jan 21 10:54:00 crc kubenswrapper[5119]: I0121 10:54:00.147363 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483214-p6qcs" Jan 21 10:54:00 crc kubenswrapper[5119]: I0121 10:54:00.149735 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:54:00 crc kubenswrapper[5119]: I0121 10:54:00.150145 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:54:00 crc kubenswrapper[5119]: I0121 10:54:00.153159 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\"" Jan 21 10:54:00 crc kubenswrapper[5119]: I0121 10:54:00.203711 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rft9\" (UniqueName: \"kubernetes.io/projected/e275e5e9-b485-42da-be23-5bc0fa02d065-kube-api-access-4rft9\") pod \"auto-csr-approver-29483214-p6qcs\" (UID: \"e275e5e9-b485-42da-be23-5bc0fa02d065\") " pod="openshift-infra/auto-csr-approver-29483214-p6qcs" Jan 21 10:54:00 crc kubenswrapper[5119]: I0121 10:54:00.305419 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4rft9\" (UniqueName: \"kubernetes.io/projected/e275e5e9-b485-42da-be23-5bc0fa02d065-kube-api-access-4rft9\") pod \"auto-csr-approver-29483214-p6qcs\" (UID: \"e275e5e9-b485-42da-be23-5bc0fa02d065\") " pod="openshift-infra/auto-csr-approver-29483214-p6qcs" Jan 21 10:54:00 crc kubenswrapper[5119]: I0121 10:54:00.344630 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4rft9\" (UniqueName: \"kubernetes.io/projected/e275e5e9-b485-42da-be23-5bc0fa02d065-kube-api-access-4rft9\") pod \"auto-csr-approver-29483214-p6qcs\" (UID: \"e275e5e9-b485-42da-be23-5bc0fa02d065\") " pod="openshift-infra/auto-csr-approver-29483214-p6qcs" Jan 21 10:54:00 crc kubenswrapper[5119]: I0121 10:54:00.477230 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483214-p6qcs" Jan 21 10:54:00 crc kubenswrapper[5119]: I0121 10:54:00.722697 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483214-p6qcs"] Jan 21 10:54:01 crc kubenswrapper[5119]: I0121 10:54:01.673331 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483214-p6qcs" event={"ID":"e275e5e9-b485-42da-be23-5bc0fa02d065","Type":"ContainerStarted","Data":"6cf86bab3a5e15bbed262da4ce5b18eedcf785d35489c61b117bf6a448535887"} Jan 21 10:54:02 crc kubenswrapper[5119]: I0121 10:54:02.686050 5119 generic.go:358] "Generic (PLEG): container finished" podID="e275e5e9-b485-42da-be23-5bc0fa02d065" containerID="ee7750be22e32d41af7c431d9997ebd1f86bd2391c9d64cbd31557e69aef8ed0" exitCode=0 Jan 21 10:54:02 crc kubenswrapper[5119]: I0121 10:54:02.686311 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483214-p6qcs" event={"ID":"e275e5e9-b485-42da-be23-5bc0fa02d065","Type":"ContainerDied","Data":"ee7750be22e32d41af7c431d9997ebd1f86bd2391c9d64cbd31557e69aef8ed0"} Jan 21 10:54:03 crc kubenswrapper[5119]: I0121 10:54:03.918327 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483214-p6qcs" Jan 21 10:54:04 crc kubenswrapper[5119]: I0121 10:54:04.076252 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rft9\" (UniqueName: \"kubernetes.io/projected/e275e5e9-b485-42da-be23-5bc0fa02d065-kube-api-access-4rft9\") pod \"e275e5e9-b485-42da-be23-5bc0fa02d065\" (UID: \"e275e5e9-b485-42da-be23-5bc0fa02d065\") " Jan 21 10:54:04 crc kubenswrapper[5119]: I0121 10:54:04.099820 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e275e5e9-b485-42da-be23-5bc0fa02d065-kube-api-access-4rft9" (OuterVolumeSpecName: "kube-api-access-4rft9") pod "e275e5e9-b485-42da-be23-5bc0fa02d065" (UID: "e275e5e9-b485-42da-be23-5bc0fa02d065"). InnerVolumeSpecName "kube-api-access-4rft9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:54:04 crc kubenswrapper[5119]: I0121 10:54:04.178159 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4rft9\" (UniqueName: \"kubernetes.io/projected/e275e5e9-b485-42da-be23-5bc0fa02d065-kube-api-access-4rft9\") on node \"crc\" DevicePath \"\"" Jan 21 10:54:04 crc kubenswrapper[5119]: I0121 10:54:04.705277 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483214-p6qcs" event={"ID":"e275e5e9-b485-42da-be23-5bc0fa02d065","Type":"ContainerDied","Data":"6cf86bab3a5e15bbed262da4ce5b18eedcf785d35489c61b117bf6a448535887"} Jan 21 10:54:04 crc kubenswrapper[5119]: I0121 10:54:04.705623 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cf86bab3a5e15bbed262da4ce5b18eedcf785d35489c61b117bf6a448535887" Jan 21 10:54:04 crc kubenswrapper[5119]: I0121 10:54:04.705327 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483214-p6qcs" Jan 21 10:54:04 crc kubenswrapper[5119]: I0121 10:54:04.988761 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483208-pq4pr"] Jan 21 10:54:04 crc kubenswrapper[5119]: I0121 10:54:04.993715 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483208-pq4pr"] Jan 21 10:54:06 crc kubenswrapper[5119]: I0121 10:54:06.598094 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b7059f3-9404-412b-be12-e158a9c9c5f9" path="/var/lib/kubelet/pods/8b7059f3-9404-412b-be12-e158a9c9c5f9/volumes" Jan 21 10:54:46 crc kubenswrapper[5119]: I0121 10:54:46.833593 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log" Jan 21 10:54:46 crc kubenswrapper[5119]: I0121 10:54:46.836636 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log" Jan 21 10:54:46 crc kubenswrapper[5119]: I0121 10:54:46.838949 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 21 10:54:46 crc kubenswrapper[5119]: I0121 10:54:46.841089 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 21 10:54:53 crc kubenswrapper[5119]: I0121 10:54:53.800448 5119 scope.go:117] "RemoveContainer" containerID="b4537ad2705cf1bdc3d3096b31c3ddd2c9ebec03c36692e643ab0cdca3111ee8" Jan 21 10:55:49 crc kubenswrapper[5119]: I0121 10:55:49.918450 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:55:49 crc kubenswrapper[5119]: I0121 10:55:49.920362 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:56:00 crc kubenswrapper[5119]: I0121 10:56:00.132318 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483216-dncs5"] Jan 21 10:56:00 crc kubenswrapper[5119]: I0121 10:56:00.133376 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e275e5e9-b485-42da-be23-5bc0fa02d065" containerName="oc" Jan 21 10:56:00 crc kubenswrapper[5119]: I0121 10:56:00.133389 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="e275e5e9-b485-42da-be23-5bc0fa02d065" containerName="oc" Jan 21 10:56:00 crc kubenswrapper[5119]: I0121 10:56:00.133532 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="e275e5e9-b485-42da-be23-5bc0fa02d065" containerName="oc" Jan 21 10:56:00 crc kubenswrapper[5119]: I0121 10:56:00.147892 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483216-dncs5" Jan 21 10:56:00 crc kubenswrapper[5119]: I0121 10:56:00.149826 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\"" Jan 21 10:56:00 crc kubenswrapper[5119]: I0121 10:56:00.151898 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:56:00 crc kubenswrapper[5119]: I0121 10:56:00.151950 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:56:00 crc kubenswrapper[5119]: I0121 10:56:00.154744 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483216-dncs5"] Jan 21 10:56:00 crc kubenswrapper[5119]: I0121 10:56:00.191579 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k65qj\" (UniqueName: \"kubernetes.io/projected/9c888c8c-4960-4b7d-a7d3-122c21b7bf09-kube-api-access-k65qj\") pod \"auto-csr-approver-29483216-dncs5\" (UID: \"9c888c8c-4960-4b7d-a7d3-122c21b7bf09\") " pod="openshift-infra/auto-csr-approver-29483216-dncs5" Jan 21 10:56:00 crc kubenswrapper[5119]: I0121 10:56:00.293256 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k65qj\" (UniqueName: \"kubernetes.io/projected/9c888c8c-4960-4b7d-a7d3-122c21b7bf09-kube-api-access-k65qj\") pod \"auto-csr-approver-29483216-dncs5\" (UID: \"9c888c8c-4960-4b7d-a7d3-122c21b7bf09\") " pod="openshift-infra/auto-csr-approver-29483216-dncs5" Jan 21 10:56:00 crc kubenswrapper[5119]: I0121 10:56:00.313590 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k65qj\" (UniqueName: \"kubernetes.io/projected/9c888c8c-4960-4b7d-a7d3-122c21b7bf09-kube-api-access-k65qj\") pod \"auto-csr-approver-29483216-dncs5\" (UID: 
\"9c888c8c-4960-4b7d-a7d3-122c21b7bf09\") " pod="openshift-infra/auto-csr-approver-29483216-dncs5" Jan 21 10:56:00 crc kubenswrapper[5119]: I0121 10:56:00.473291 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483216-dncs5" Jan 21 10:56:00 crc kubenswrapper[5119]: I0121 10:56:00.710492 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483216-dncs5"] Jan 21 10:56:01 crc kubenswrapper[5119]: I0121 10:56:01.656582 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483216-dncs5" event={"ID":"9c888c8c-4960-4b7d-a7d3-122c21b7bf09","Type":"ContainerStarted","Data":"c4631c61af40ad5b0437890af9d65d7a5bd211b7e086cfa221d5d347d769b8c4"} Jan 21 10:56:03 crc kubenswrapper[5119]: I0121 10:56:03.690303 5119 generic.go:358] "Generic (PLEG): container finished" podID="9c888c8c-4960-4b7d-a7d3-122c21b7bf09" containerID="2949b0132fa607130ab58dfcfec2e4fb35ec6b08ee3c7a4dafa9cdef18898aee" exitCode=0 Jan 21 10:56:03 crc kubenswrapper[5119]: I0121 10:56:03.690790 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483216-dncs5" event={"ID":"9c888c8c-4960-4b7d-a7d3-122c21b7bf09","Type":"ContainerDied","Data":"2949b0132fa607130ab58dfcfec2e4fb35ec6b08ee3c7a4dafa9cdef18898aee"} Jan 21 10:56:04 crc kubenswrapper[5119]: I0121 10:56:04.959198 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483216-dncs5" Jan 21 10:56:05 crc kubenswrapper[5119]: I0121 10:56:05.075333 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k65qj\" (UniqueName: \"kubernetes.io/projected/9c888c8c-4960-4b7d-a7d3-122c21b7bf09-kube-api-access-k65qj\") pod \"9c888c8c-4960-4b7d-a7d3-122c21b7bf09\" (UID: \"9c888c8c-4960-4b7d-a7d3-122c21b7bf09\") " Jan 21 10:56:05 crc kubenswrapper[5119]: I0121 10:56:05.081421 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c888c8c-4960-4b7d-a7d3-122c21b7bf09-kube-api-access-k65qj" (OuterVolumeSpecName: "kube-api-access-k65qj") pod "9c888c8c-4960-4b7d-a7d3-122c21b7bf09" (UID: "9c888c8c-4960-4b7d-a7d3-122c21b7bf09"). InnerVolumeSpecName "kube-api-access-k65qj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:56:05 crc kubenswrapper[5119]: I0121 10:56:05.177886 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k65qj\" (UniqueName: \"kubernetes.io/projected/9c888c8c-4960-4b7d-a7d3-122c21b7bf09-kube-api-access-k65qj\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:05 crc kubenswrapper[5119]: I0121 10:56:05.725274 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483216-dncs5" event={"ID":"9c888c8c-4960-4b7d-a7d3-122c21b7bf09","Type":"ContainerDied","Data":"c4631c61af40ad5b0437890af9d65d7a5bd211b7e086cfa221d5d347d769b8c4"} Jan 21 10:56:05 crc kubenswrapper[5119]: I0121 10:56:05.725313 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4631c61af40ad5b0437890af9d65d7a5bd211b7e086cfa221d5d347d769b8c4" Jan 21 10:56:05 crc kubenswrapper[5119]: I0121 10:56:05.725368 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483216-dncs5" Jan 21 10:56:06 crc kubenswrapper[5119]: I0121 10:56:06.014562 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483210-bbltn"] Jan 21 10:56:06 crc kubenswrapper[5119]: I0121 10:56:06.019616 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483210-bbltn"] Jan 21 10:56:06 crc kubenswrapper[5119]: I0121 10:56:06.599509 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f8b3da1-c781-4d0d-9233-b27658d65749" path="/var/lib/kubelet/pods/7f8b3da1-c781-4d0d-9233-b27658d65749/volumes" Jan 21 10:56:19 crc kubenswrapper[5119]: I0121 10:56:19.918585 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:56:19 crc kubenswrapper[5119]: I0121 10:56:19.919220 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:56:49 crc kubenswrapper[5119]: I0121 10:56:49.919346 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:56:49 crc kubenswrapper[5119]: I0121 10:56:49.919991 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" 
podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:56:49 crc kubenswrapper[5119]: I0121 10:56:49.920089 5119 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" Jan 21 10:56:49 crc kubenswrapper[5119]: I0121 10:56:49.921013 5119 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6c6c6443973f84ae68f595b28489b4574cb199d66372a1d02488d64455df6eb3"} pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 10:56:49 crc kubenswrapper[5119]: I0121 10:56:49.921128 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" containerID="cri-o://6c6c6443973f84ae68f595b28489b4574cb199d66372a1d02488d64455df6eb3" gracePeriod=600 Jan 21 10:56:50 crc kubenswrapper[5119]: I0121 10:56:50.061331 5119 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 10:56:50 crc kubenswrapper[5119]: I0121 10:56:50.100707 5119 generic.go:358] "Generic (PLEG): container finished" podID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerID="6c6c6443973f84ae68f595b28489b4574cb199d66372a1d02488d64455df6eb3" exitCode=0 Jan 21 10:56:50 crc kubenswrapper[5119]: I0121 10:56:50.100837 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerDied","Data":"6c6c6443973f84ae68f595b28489b4574cb199d66372a1d02488d64455df6eb3"} Jan 
21 10:56:50 crc kubenswrapper[5119]: I0121 10:56:50.101045 5119 scope.go:117] "RemoveContainer" containerID="a521ed658fa71bfaf54a98ad7b4204bf0ba78d4dc0e831fed9b773d9403d0b7f" Jan 21 10:56:51 crc kubenswrapper[5119]: I0121 10:56:51.109431 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerStarted","Data":"dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a"} Jan 21 10:56:53 crc kubenswrapper[5119]: I0121 10:56:53.927802 5119 scope.go:117] "RemoveContainer" containerID="a6bdbb55faaac5e11e930855f391b6906a9360e06d3ffb2abee9fefb8df736b2" Jan 21 10:57:26 crc kubenswrapper[5119]: I0121 10:57:26.174857 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-bqx8m"] Jan 21 10:57:26 crc kubenswrapper[5119]: I0121 10:57:26.177482 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9c888c8c-4960-4b7d-a7d3-122c21b7bf09" containerName="oc" Jan 21 10:57:26 crc kubenswrapper[5119]: I0121 10:57:26.177601 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c888c8c-4960-4b7d-a7d3-122c21b7bf09" containerName="oc" Jan 21 10:57:26 crc kubenswrapper[5119]: I0121 10:57:26.180617 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="9c888c8c-4960-4b7d-a7d3-122c21b7bf09" containerName="oc" Jan 21 10:57:27 crc kubenswrapper[5119]: I0121 10:57:27.283209 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-bqx8m"] Jan 21 10:57:27 crc kubenswrapper[5119]: I0121 10:57:27.283480 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-bqx8m" Jan 21 10:57:27 crc kubenswrapper[5119]: I0121 10:57:27.360595 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mb8t\" (UniqueName: \"kubernetes.io/projected/6d62b474-eb4a-40f0-a8b3-f53f096d7551-kube-api-access-7mb8t\") pod \"infrawatch-operators-bqx8m\" (UID: \"6d62b474-eb4a-40f0-a8b3-f53f096d7551\") " pod="service-telemetry/infrawatch-operators-bqx8m" Jan 21 10:57:27 crc kubenswrapper[5119]: I0121 10:57:27.462583 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7mb8t\" (UniqueName: \"kubernetes.io/projected/6d62b474-eb4a-40f0-a8b3-f53f096d7551-kube-api-access-7mb8t\") pod \"infrawatch-operators-bqx8m\" (UID: \"6d62b474-eb4a-40f0-a8b3-f53f096d7551\") " pod="service-telemetry/infrawatch-operators-bqx8m" Jan 21 10:57:27 crc kubenswrapper[5119]: I0121 10:57:27.503759 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mb8t\" (UniqueName: \"kubernetes.io/projected/6d62b474-eb4a-40f0-a8b3-f53f096d7551-kube-api-access-7mb8t\") pod \"infrawatch-operators-bqx8m\" (UID: \"6d62b474-eb4a-40f0-a8b3-f53f096d7551\") " pod="service-telemetry/infrawatch-operators-bqx8m" Jan 21 10:57:27 crc kubenswrapper[5119]: I0121 10:57:27.606556 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-bqx8m" Jan 21 10:57:27 crc kubenswrapper[5119]: I0121 10:57:27.809664 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-bqx8m"] Jan 21 10:57:28 crc kubenswrapper[5119]: I0121 10:57:28.380141 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-bqx8m" event={"ID":"6d62b474-eb4a-40f0-a8b3-f53f096d7551","Type":"ContainerStarted","Data":"2a0f735d52405b22b11deff356d15e44301abd021ea3e96424204b53b8643693"} Jan 21 10:57:28 crc kubenswrapper[5119]: I0121 10:57:28.380446 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-bqx8m" event={"ID":"6d62b474-eb4a-40f0-a8b3-f53f096d7551","Type":"ContainerStarted","Data":"f5f07ea7608050a7be1434174092e12d621427ff70a9852359b776fb1652b733"} Jan 21 10:57:28 crc kubenswrapper[5119]: I0121 10:57:28.398447 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-bqx8m" podStartSLOduration=2.281711786 podStartE2EDuration="2.398429733s" podCreationTimestamp="2026-01-21 10:57:26 +0000 UTC" firstStartedPulling="2026-01-21 10:57:27.817792207 +0000 UTC m=+3763.485883895" lastFinishedPulling="2026-01-21 10:57:27.934510164 +0000 UTC m=+3763.602601842" observedRunningTime="2026-01-21 10:57:28.394967079 +0000 UTC m=+3764.063058787" watchObservedRunningTime="2026-01-21 10:57:28.398429733 +0000 UTC m=+3764.066521411" Jan 21 10:57:37 crc kubenswrapper[5119]: I0121 10:57:37.549197 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xxfj5"] Jan 21 10:57:37 crc kubenswrapper[5119]: I0121 10:57:37.763837 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xxfj5"] Jan 21 10:57:37 crc kubenswrapper[5119]: I0121 10:57:37.764893 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="service-telemetry/infrawatch-operators-bqx8m" Jan 21 10:57:37 crc kubenswrapper[5119]: I0121 10:57:37.764113 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xxfj5" Jan 21 10:57:37 crc kubenswrapper[5119]: I0121 10:57:37.765144 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-bqx8m" Jan 21 10:57:37 crc kubenswrapper[5119]: I0121 10:57:37.765834 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-bqx8m" Jan 21 10:57:37 crc kubenswrapper[5119]: I0121 10:57:37.802346 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-bqx8m" Jan 21 10:57:37 crc kubenswrapper[5119]: I0121 10:57:37.806972 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/785b5e03-10ad-4dcb-918d-88f56d0086a1-catalog-content\") pod \"certified-operators-xxfj5\" (UID: \"785b5e03-10ad-4dcb-918d-88f56d0086a1\") " pod="openshift-marketplace/certified-operators-xxfj5" Jan 21 10:57:37 crc kubenswrapper[5119]: I0121 10:57:37.807039 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/785b5e03-10ad-4dcb-918d-88f56d0086a1-utilities\") pod \"certified-operators-xxfj5\" (UID: \"785b5e03-10ad-4dcb-918d-88f56d0086a1\") " pod="openshift-marketplace/certified-operators-xxfj5" Jan 21 10:57:37 crc kubenswrapper[5119]: I0121 10:57:37.807238 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4v6k\" (UniqueName: \"kubernetes.io/projected/785b5e03-10ad-4dcb-918d-88f56d0086a1-kube-api-access-n4v6k\") pod \"certified-operators-xxfj5\" (UID: 
\"785b5e03-10ad-4dcb-918d-88f56d0086a1\") " pod="openshift-marketplace/certified-operators-xxfj5" Jan 21 10:57:37 crc kubenswrapper[5119]: I0121 10:57:37.908288 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/785b5e03-10ad-4dcb-918d-88f56d0086a1-catalog-content\") pod \"certified-operators-xxfj5\" (UID: \"785b5e03-10ad-4dcb-918d-88f56d0086a1\") " pod="openshift-marketplace/certified-operators-xxfj5" Jan 21 10:57:37 crc kubenswrapper[5119]: I0121 10:57:37.908780 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/785b5e03-10ad-4dcb-918d-88f56d0086a1-catalog-content\") pod \"certified-operators-xxfj5\" (UID: \"785b5e03-10ad-4dcb-918d-88f56d0086a1\") " pod="openshift-marketplace/certified-operators-xxfj5" Jan 21 10:57:37 crc kubenswrapper[5119]: I0121 10:57:37.908852 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/785b5e03-10ad-4dcb-918d-88f56d0086a1-utilities\") pod \"certified-operators-xxfj5\" (UID: \"785b5e03-10ad-4dcb-918d-88f56d0086a1\") " pod="openshift-marketplace/certified-operators-xxfj5" Jan 21 10:57:37 crc kubenswrapper[5119]: I0121 10:57:37.908924 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n4v6k\" (UniqueName: \"kubernetes.io/projected/785b5e03-10ad-4dcb-918d-88f56d0086a1-kube-api-access-n4v6k\") pod \"certified-operators-xxfj5\" (UID: \"785b5e03-10ad-4dcb-918d-88f56d0086a1\") " pod="openshift-marketplace/certified-operators-xxfj5" Jan 21 10:57:37 crc kubenswrapper[5119]: I0121 10:57:37.909165 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/785b5e03-10ad-4dcb-918d-88f56d0086a1-utilities\") pod \"certified-operators-xxfj5\" (UID: \"785b5e03-10ad-4dcb-918d-88f56d0086a1\") 
" pod="openshift-marketplace/certified-operators-xxfj5" Jan 21 10:57:37 crc kubenswrapper[5119]: I0121 10:57:37.949099 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4v6k\" (UniqueName: \"kubernetes.io/projected/785b5e03-10ad-4dcb-918d-88f56d0086a1-kube-api-access-n4v6k\") pod \"certified-operators-xxfj5\" (UID: \"785b5e03-10ad-4dcb-918d-88f56d0086a1\") " pod="openshift-marketplace/certified-operators-xxfj5" Jan 21 10:57:38 crc kubenswrapper[5119]: I0121 10:57:38.100063 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xxfj5" Jan 21 10:57:38 crc kubenswrapper[5119]: I0121 10:57:38.317162 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xxfj5"] Jan 21 10:57:38 crc kubenswrapper[5119]: I0121 10:57:38.452326 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xxfj5" event={"ID":"785b5e03-10ad-4dcb-918d-88f56d0086a1","Type":"ContainerStarted","Data":"221bd70f779be6632a7764c3aa106dca27614c9696e019bba9428b27ea9b444e"} Jan 21 10:57:39 crc kubenswrapper[5119]: I0121 10:57:39.457967 5119 generic.go:358] "Generic (PLEG): container finished" podID="785b5e03-10ad-4dcb-918d-88f56d0086a1" containerID="51b0424e35a10558b153cbbacd2320bd08109407435ff08138f8c887292ef960" exitCode=0 Jan 21 10:57:39 crc kubenswrapper[5119]: I0121 10:57:39.458739 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xxfj5" event={"ID":"785b5e03-10ad-4dcb-918d-88f56d0086a1","Type":"ContainerDied","Data":"51b0424e35a10558b153cbbacd2320bd08109407435ff08138f8c887292ef960"} Jan 21 10:57:40 crc kubenswrapper[5119]: I0121 10:57:40.466766 5119 generic.go:358] "Generic (PLEG): container finished" podID="785b5e03-10ad-4dcb-918d-88f56d0086a1" containerID="44c08f2e528d133936416b910b184ada9900a2c2b1283a3c7d6cf349c9566a2a" exitCode=0 Jan 21 10:57:40 crc 
kubenswrapper[5119]: I0121 10:57:40.466952 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xxfj5" event={"ID":"785b5e03-10ad-4dcb-918d-88f56d0086a1","Type":"ContainerDied","Data":"44c08f2e528d133936416b910b184ada9900a2c2b1283a3c7d6cf349c9566a2a"} Jan 21 10:57:41 crc kubenswrapper[5119]: I0121 10:57:41.486762 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xxfj5" event={"ID":"785b5e03-10ad-4dcb-918d-88f56d0086a1","Type":"ContainerStarted","Data":"fcbaa8da6882d47593289dffd5065beb8abc738b5c8f43a15a9656a9d1072c79"} Jan 21 10:57:41 crc kubenswrapper[5119]: I0121 10:57:41.505270 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xxfj5" podStartSLOduration=3.896602579 podStartE2EDuration="4.505253438s" podCreationTimestamp="2026-01-21 10:57:37 +0000 UTC" firstStartedPulling="2026-01-21 10:57:39.459400805 +0000 UTC m=+3775.127492473" lastFinishedPulling="2026-01-21 10:57:40.068051654 +0000 UTC m=+3775.736143332" observedRunningTime="2026-01-21 10:57:41.504201039 +0000 UTC m=+3777.172292717" watchObservedRunningTime="2026-01-21 10:57:41.505253438 +0000 UTC m=+3777.173345116" Jan 21 10:57:42 crc kubenswrapper[5119]: I0121 10:57:42.343936 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-bqx8m"] Jan 21 10:57:42 crc kubenswrapper[5119]: I0121 10:57:42.344431 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-bqx8m" podUID="6d62b474-eb4a-40f0-a8b3-f53f096d7551" containerName="registry-server" containerID="cri-o://2a0f735d52405b22b11deff356d15e44301abd021ea3e96424204b53b8643693" gracePeriod=2 Jan 21 10:57:42 crc kubenswrapper[5119]: I0121 10:57:42.498186 5119 generic.go:358] "Generic (PLEG): container finished" podID="6d62b474-eb4a-40f0-a8b3-f53f096d7551" 
containerID="2a0f735d52405b22b11deff356d15e44301abd021ea3e96424204b53b8643693" exitCode=0 Jan 21 10:57:42 crc kubenswrapper[5119]: I0121 10:57:42.498880 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-bqx8m" event={"ID":"6d62b474-eb4a-40f0-a8b3-f53f096d7551","Type":"ContainerDied","Data":"2a0f735d52405b22b11deff356d15e44301abd021ea3e96424204b53b8643693"} Jan 21 10:57:42 crc kubenswrapper[5119]: I0121 10:57:42.694297 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-bqx8m" Jan 21 10:57:42 crc kubenswrapper[5119]: I0121 10:57:42.782936 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mb8t\" (UniqueName: \"kubernetes.io/projected/6d62b474-eb4a-40f0-a8b3-f53f096d7551-kube-api-access-7mb8t\") pod \"6d62b474-eb4a-40f0-a8b3-f53f096d7551\" (UID: \"6d62b474-eb4a-40f0-a8b3-f53f096d7551\") " Jan 21 10:57:42 crc kubenswrapper[5119]: I0121 10:57:42.789132 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d62b474-eb4a-40f0-a8b3-f53f096d7551-kube-api-access-7mb8t" (OuterVolumeSpecName: "kube-api-access-7mb8t") pod "6d62b474-eb4a-40f0-a8b3-f53f096d7551" (UID: "6d62b474-eb4a-40f0-a8b3-f53f096d7551"). InnerVolumeSpecName "kube-api-access-7mb8t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:57:42 crc kubenswrapper[5119]: I0121 10:57:42.885065 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7mb8t\" (UniqueName: \"kubernetes.io/projected/6d62b474-eb4a-40f0-a8b3-f53f096d7551-kube-api-access-7mb8t\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:43 crc kubenswrapper[5119]: I0121 10:57:43.513670 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-bqx8m" Jan 21 10:57:43 crc kubenswrapper[5119]: I0121 10:57:43.513693 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-bqx8m" event={"ID":"6d62b474-eb4a-40f0-a8b3-f53f096d7551","Type":"ContainerDied","Data":"f5f07ea7608050a7be1434174092e12d621427ff70a9852359b776fb1652b733"} Jan 21 10:57:43 crc kubenswrapper[5119]: I0121 10:57:43.513746 5119 scope.go:117] "RemoveContainer" containerID="2a0f735d52405b22b11deff356d15e44301abd021ea3e96424204b53b8643693" Jan 21 10:57:43 crc kubenswrapper[5119]: I0121 10:57:43.545691 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-bqx8m"] Jan 21 10:57:43 crc kubenswrapper[5119]: I0121 10:57:43.552866 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-bqx8m"] Jan 21 10:57:44 crc kubenswrapper[5119]: I0121 10:57:44.599506 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d62b474-eb4a-40f0-a8b3-f53f096d7551" path="/var/lib/kubelet/pods/6d62b474-eb4a-40f0-a8b3-f53f096d7551/volumes" Jan 21 10:57:48 crc kubenswrapper[5119]: I0121 10:57:48.101715 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-xxfj5" Jan 21 10:57:48 crc kubenswrapper[5119]: I0121 10:57:48.103039 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xxfj5" Jan 21 10:57:48 crc kubenswrapper[5119]: I0121 10:57:48.152495 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xxfj5" Jan 21 10:57:48 crc kubenswrapper[5119]: I0121 10:57:48.607238 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xxfj5" Jan 21 10:57:49 crc kubenswrapper[5119]: I0121 10:57:49.741537 
5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xxfj5"] Jan 21 10:57:51 crc kubenswrapper[5119]: I0121 10:57:51.570882 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xxfj5" podUID="785b5e03-10ad-4dcb-918d-88f56d0086a1" containerName="registry-server" containerID="cri-o://fcbaa8da6882d47593289dffd5065beb8abc738b5c8f43a15a9656a9d1072c79" gracePeriod=2 Jan 21 10:57:52 crc kubenswrapper[5119]: I0121 10:57:52.469747 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xxfj5" Jan 21 10:57:52 crc kubenswrapper[5119]: I0121 10:57:52.581512 5119 generic.go:358] "Generic (PLEG): container finished" podID="785b5e03-10ad-4dcb-918d-88f56d0086a1" containerID="fcbaa8da6882d47593289dffd5065beb8abc738b5c8f43a15a9656a9d1072c79" exitCode=0 Jan 21 10:57:52 crc kubenswrapper[5119]: I0121 10:57:52.581559 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xxfj5" event={"ID":"785b5e03-10ad-4dcb-918d-88f56d0086a1","Type":"ContainerDied","Data":"fcbaa8da6882d47593289dffd5065beb8abc738b5c8f43a15a9656a9d1072c79"} Jan 21 10:57:52 crc kubenswrapper[5119]: I0121 10:57:52.581592 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xxfj5" event={"ID":"785b5e03-10ad-4dcb-918d-88f56d0086a1","Type":"ContainerDied","Data":"221bd70f779be6632a7764c3aa106dca27614c9696e019bba9428b27ea9b444e"} Jan 21 10:57:52 crc kubenswrapper[5119]: I0121 10:57:52.581625 5119 scope.go:117] "RemoveContainer" containerID="fcbaa8da6882d47593289dffd5065beb8abc738b5c8f43a15a9656a9d1072c79" Jan 21 10:57:52 crc kubenswrapper[5119]: I0121 10:57:52.581813 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xxfj5" Jan 21 10:57:52 crc kubenswrapper[5119]: I0121 10:57:52.611284 5119 scope.go:117] "RemoveContainer" containerID="44c08f2e528d133936416b910b184ada9900a2c2b1283a3c7d6cf349c9566a2a" Jan 21 10:57:52 crc kubenswrapper[5119]: I0121 10:57:52.631568 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/785b5e03-10ad-4dcb-918d-88f56d0086a1-utilities\") pod \"785b5e03-10ad-4dcb-918d-88f56d0086a1\" (UID: \"785b5e03-10ad-4dcb-918d-88f56d0086a1\") " Jan 21 10:57:52 crc kubenswrapper[5119]: I0121 10:57:52.631995 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/785b5e03-10ad-4dcb-918d-88f56d0086a1-catalog-content\") pod \"785b5e03-10ad-4dcb-918d-88f56d0086a1\" (UID: \"785b5e03-10ad-4dcb-918d-88f56d0086a1\") " Jan 21 10:57:52 crc kubenswrapper[5119]: I0121 10:57:52.632201 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4v6k\" (UniqueName: \"kubernetes.io/projected/785b5e03-10ad-4dcb-918d-88f56d0086a1-kube-api-access-n4v6k\") pod \"785b5e03-10ad-4dcb-918d-88f56d0086a1\" (UID: \"785b5e03-10ad-4dcb-918d-88f56d0086a1\") " Jan 21 10:57:52 crc kubenswrapper[5119]: I0121 10:57:52.633258 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/785b5e03-10ad-4dcb-918d-88f56d0086a1-utilities" (OuterVolumeSpecName: "utilities") pod "785b5e03-10ad-4dcb-918d-88f56d0086a1" (UID: "785b5e03-10ad-4dcb-918d-88f56d0086a1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:57:52 crc kubenswrapper[5119]: I0121 10:57:52.637908 5119 scope.go:117] "RemoveContainer" containerID="51b0424e35a10558b153cbbacd2320bd08109407435ff08138f8c887292ef960" Jan 21 10:57:52 crc kubenswrapper[5119]: I0121 10:57:52.640820 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/785b5e03-10ad-4dcb-918d-88f56d0086a1-kube-api-access-n4v6k" (OuterVolumeSpecName: "kube-api-access-n4v6k") pod "785b5e03-10ad-4dcb-918d-88f56d0086a1" (UID: "785b5e03-10ad-4dcb-918d-88f56d0086a1"). InnerVolumeSpecName "kube-api-access-n4v6k". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:57:52 crc kubenswrapper[5119]: I0121 10:57:52.662240 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/785b5e03-10ad-4dcb-918d-88f56d0086a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "785b5e03-10ad-4dcb-918d-88f56d0086a1" (UID: "785b5e03-10ad-4dcb-918d-88f56d0086a1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:57:52 crc kubenswrapper[5119]: I0121 10:57:52.684712 5119 scope.go:117] "RemoveContainer" containerID="fcbaa8da6882d47593289dffd5065beb8abc738b5c8f43a15a9656a9d1072c79" Jan 21 10:57:52 crc kubenswrapper[5119]: E0121 10:57:52.686190 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcbaa8da6882d47593289dffd5065beb8abc738b5c8f43a15a9656a9d1072c79\": container with ID starting with fcbaa8da6882d47593289dffd5065beb8abc738b5c8f43a15a9656a9d1072c79 not found: ID does not exist" containerID="fcbaa8da6882d47593289dffd5065beb8abc738b5c8f43a15a9656a9d1072c79" Jan 21 10:57:52 crc kubenswrapper[5119]: I0121 10:57:52.686245 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcbaa8da6882d47593289dffd5065beb8abc738b5c8f43a15a9656a9d1072c79"} err="failed to get container status \"fcbaa8da6882d47593289dffd5065beb8abc738b5c8f43a15a9656a9d1072c79\": rpc error: code = NotFound desc = could not find container \"fcbaa8da6882d47593289dffd5065beb8abc738b5c8f43a15a9656a9d1072c79\": container with ID starting with fcbaa8da6882d47593289dffd5065beb8abc738b5c8f43a15a9656a9d1072c79 not found: ID does not exist" Jan 21 10:57:52 crc kubenswrapper[5119]: I0121 10:57:52.686285 5119 scope.go:117] "RemoveContainer" containerID="44c08f2e528d133936416b910b184ada9900a2c2b1283a3c7d6cf349c9566a2a" Jan 21 10:57:52 crc kubenswrapper[5119]: E0121 10:57:52.686527 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44c08f2e528d133936416b910b184ada9900a2c2b1283a3c7d6cf349c9566a2a\": container with ID starting with 44c08f2e528d133936416b910b184ada9900a2c2b1283a3c7d6cf349c9566a2a not found: ID does not exist" containerID="44c08f2e528d133936416b910b184ada9900a2c2b1283a3c7d6cf349c9566a2a" Jan 21 10:57:52 crc kubenswrapper[5119]: I0121 10:57:52.686580 
5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44c08f2e528d133936416b910b184ada9900a2c2b1283a3c7d6cf349c9566a2a"} err="failed to get container status \"44c08f2e528d133936416b910b184ada9900a2c2b1283a3c7d6cf349c9566a2a\": rpc error: code = NotFound desc = could not find container \"44c08f2e528d133936416b910b184ada9900a2c2b1283a3c7d6cf349c9566a2a\": container with ID starting with 44c08f2e528d133936416b910b184ada9900a2c2b1283a3c7d6cf349c9566a2a not found: ID does not exist" Jan 21 10:57:52 crc kubenswrapper[5119]: I0121 10:57:52.686616 5119 scope.go:117] "RemoveContainer" containerID="51b0424e35a10558b153cbbacd2320bd08109407435ff08138f8c887292ef960" Jan 21 10:57:52 crc kubenswrapper[5119]: E0121 10:57:52.687623 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51b0424e35a10558b153cbbacd2320bd08109407435ff08138f8c887292ef960\": container with ID starting with 51b0424e35a10558b153cbbacd2320bd08109407435ff08138f8c887292ef960 not found: ID does not exist" containerID="51b0424e35a10558b153cbbacd2320bd08109407435ff08138f8c887292ef960" Jan 21 10:57:52 crc kubenswrapper[5119]: I0121 10:57:52.687659 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51b0424e35a10558b153cbbacd2320bd08109407435ff08138f8c887292ef960"} err="failed to get container status \"51b0424e35a10558b153cbbacd2320bd08109407435ff08138f8c887292ef960\": rpc error: code = NotFound desc = could not find container \"51b0424e35a10558b153cbbacd2320bd08109407435ff08138f8c887292ef960\": container with ID starting with 51b0424e35a10558b153cbbacd2320bd08109407435ff08138f8c887292ef960 not found: ID does not exist" Jan 21 10:57:52 crc kubenswrapper[5119]: I0121 10:57:52.734580 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n4v6k\" (UniqueName: 
\"kubernetes.io/projected/785b5e03-10ad-4dcb-918d-88f56d0086a1-kube-api-access-n4v6k\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:52 crc kubenswrapper[5119]: I0121 10:57:52.734669 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/785b5e03-10ad-4dcb-918d-88f56d0086a1-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:52 crc kubenswrapper[5119]: I0121 10:57:52.734679 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/785b5e03-10ad-4dcb-918d-88f56d0086a1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:52 crc kubenswrapper[5119]: I0121 10:57:52.919673 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xxfj5"] Jan 21 10:57:52 crc kubenswrapper[5119]: I0121 10:57:52.925826 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xxfj5"] Jan 21 10:57:54 crc kubenswrapper[5119]: I0121 10:57:54.601230 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="785b5e03-10ad-4dcb-918d-88f56d0086a1" path="/var/lib/kubelet/pods/785b5e03-10ad-4dcb-918d-88f56d0086a1/volumes" Jan 21 10:58:00 crc kubenswrapper[5119]: I0121 10:58:00.136418 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483218-t8hlk"] Jan 21 10:58:00 crc kubenswrapper[5119]: I0121 10:58:00.137790 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="785b5e03-10ad-4dcb-918d-88f56d0086a1" containerName="extract-content" Jan 21 10:58:00 crc kubenswrapper[5119]: I0121 10:58:00.137809 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="785b5e03-10ad-4dcb-918d-88f56d0086a1" containerName="extract-content" Jan 21 10:58:00 crc kubenswrapper[5119]: I0121 10:58:00.137819 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="785b5e03-10ad-4dcb-918d-88f56d0086a1" containerName="registry-server" Jan 21 10:58:00 crc kubenswrapper[5119]: I0121 10:58:00.137824 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="785b5e03-10ad-4dcb-918d-88f56d0086a1" containerName="registry-server" Jan 21 10:58:00 crc kubenswrapper[5119]: I0121 10:58:00.137844 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6d62b474-eb4a-40f0-a8b3-f53f096d7551" containerName="registry-server" Jan 21 10:58:00 crc kubenswrapper[5119]: I0121 10:58:00.137850 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d62b474-eb4a-40f0-a8b3-f53f096d7551" containerName="registry-server" Jan 21 10:58:00 crc kubenswrapper[5119]: I0121 10:58:00.137878 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="785b5e03-10ad-4dcb-918d-88f56d0086a1" containerName="extract-utilities" Jan 21 10:58:00 crc kubenswrapper[5119]: I0121 10:58:00.137883 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="785b5e03-10ad-4dcb-918d-88f56d0086a1" containerName="extract-utilities" Jan 21 10:58:00 crc kubenswrapper[5119]: I0121 10:58:00.138009 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="785b5e03-10ad-4dcb-918d-88f56d0086a1" containerName="registry-server" Jan 21 10:58:00 crc kubenswrapper[5119]: I0121 10:58:00.138024 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="6d62b474-eb4a-40f0-a8b3-f53f096d7551" containerName="registry-server" Jan 21 10:58:00 crc kubenswrapper[5119]: I0121 10:58:00.157705 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483218-t8hlk"] Jan 21 10:58:00 crc kubenswrapper[5119]: I0121 10:58:00.157961 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483218-t8hlk" Jan 21 10:58:00 crc kubenswrapper[5119]: I0121 10:58:00.159880 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:58:00 crc kubenswrapper[5119]: I0121 10:58:00.159929 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:58:00 crc kubenswrapper[5119]: I0121 10:58:00.163029 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\"" Jan 21 10:58:00 crc kubenswrapper[5119]: I0121 10:58:00.247103 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhz9t\" (UniqueName: \"kubernetes.io/projected/06d8398a-ac2e-49e1-a854-f96510b839a1-kube-api-access-hhz9t\") pod \"auto-csr-approver-29483218-t8hlk\" (UID: \"06d8398a-ac2e-49e1-a854-f96510b839a1\") " pod="openshift-infra/auto-csr-approver-29483218-t8hlk" Jan 21 10:58:00 crc kubenswrapper[5119]: I0121 10:58:00.348339 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hhz9t\" (UniqueName: \"kubernetes.io/projected/06d8398a-ac2e-49e1-a854-f96510b839a1-kube-api-access-hhz9t\") pod \"auto-csr-approver-29483218-t8hlk\" (UID: \"06d8398a-ac2e-49e1-a854-f96510b839a1\") " pod="openshift-infra/auto-csr-approver-29483218-t8hlk" Jan 21 10:58:00 crc kubenswrapper[5119]: I0121 10:58:00.369198 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhz9t\" (UniqueName: \"kubernetes.io/projected/06d8398a-ac2e-49e1-a854-f96510b839a1-kube-api-access-hhz9t\") pod \"auto-csr-approver-29483218-t8hlk\" (UID: \"06d8398a-ac2e-49e1-a854-f96510b839a1\") " pod="openshift-infra/auto-csr-approver-29483218-t8hlk" Jan 21 10:58:00 crc kubenswrapper[5119]: I0121 10:58:00.477209 5119 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483218-t8hlk" Jan 21 10:58:00 crc kubenswrapper[5119]: I0121 10:58:00.658470 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483218-t8hlk"] Jan 21 10:58:01 crc kubenswrapper[5119]: I0121 10:58:01.645596 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483218-t8hlk" event={"ID":"06d8398a-ac2e-49e1-a854-f96510b839a1","Type":"ContainerStarted","Data":"2b47ade53dbe46ea3f22959550740865f586cda4dc6efd29c43e4cac3cd2387d"} Jan 21 10:58:02 crc kubenswrapper[5119]: I0121 10:58:02.656770 5119 generic.go:358] "Generic (PLEG): container finished" podID="06d8398a-ac2e-49e1-a854-f96510b839a1" containerID="b134f0350900494a38e1131767102b55dcf3479692be6384ece7256cdb31d9cd" exitCode=0 Jan 21 10:58:02 crc kubenswrapper[5119]: I0121 10:58:02.656850 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483218-t8hlk" event={"ID":"06d8398a-ac2e-49e1-a854-f96510b839a1","Type":"ContainerDied","Data":"b134f0350900494a38e1131767102b55dcf3479692be6384ece7256cdb31d9cd"} Jan 21 10:58:03 crc kubenswrapper[5119]: I0121 10:58:03.880203 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483218-t8hlk" Jan 21 10:58:03 crc kubenswrapper[5119]: I0121 10:58:03.902713 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhz9t\" (UniqueName: \"kubernetes.io/projected/06d8398a-ac2e-49e1-a854-f96510b839a1-kube-api-access-hhz9t\") pod \"06d8398a-ac2e-49e1-a854-f96510b839a1\" (UID: \"06d8398a-ac2e-49e1-a854-f96510b839a1\") " Jan 21 10:58:03 crc kubenswrapper[5119]: I0121 10:58:03.911029 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06d8398a-ac2e-49e1-a854-f96510b839a1-kube-api-access-hhz9t" (OuterVolumeSpecName: "kube-api-access-hhz9t") pod "06d8398a-ac2e-49e1-a854-f96510b839a1" (UID: "06d8398a-ac2e-49e1-a854-f96510b839a1"). InnerVolumeSpecName "kube-api-access-hhz9t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:58:04 crc kubenswrapper[5119]: I0121 10:58:04.004222 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hhz9t\" (UniqueName: \"kubernetes.io/projected/06d8398a-ac2e-49e1-a854-f96510b839a1-kube-api-access-hhz9t\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:04 crc kubenswrapper[5119]: I0121 10:58:04.677329 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483218-t8hlk" event={"ID":"06d8398a-ac2e-49e1-a854-f96510b839a1","Type":"ContainerDied","Data":"2b47ade53dbe46ea3f22959550740865f586cda4dc6efd29c43e4cac3cd2387d"} Jan 21 10:58:04 crc kubenswrapper[5119]: I0121 10:58:04.677375 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b47ade53dbe46ea3f22959550740865f586cda4dc6efd29c43e4cac3cd2387d" Jan 21 10:58:04 crc kubenswrapper[5119]: I0121 10:58:04.677443 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483218-t8hlk" Jan 21 10:58:04 crc kubenswrapper[5119]: I0121 10:58:04.936902 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483212-5fjkn"] Jan 21 10:58:04 crc kubenswrapper[5119]: I0121 10:58:04.944853 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483212-5fjkn"] Jan 21 10:58:06 crc kubenswrapper[5119]: I0121 10:58:06.598857 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c906c18e-cb01-4a29-b300-542b4fe6140b" path="/var/lib/kubelet/pods/c906c18e-cb01-4a29-b300-542b4fe6140b/volumes" Jan 21 10:58:54 crc kubenswrapper[5119]: I0121 10:58:54.092087 5119 scope.go:117] "RemoveContainer" containerID="f33f5ad8aa4c1ef657dac19b1dbbd675efa15e6dba2070f1b3ecfd10277ce9b4" Jan 21 10:59:19 crc kubenswrapper[5119]: I0121 10:59:19.918954 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:59:19 crc kubenswrapper[5119]: I0121 10:59:19.919650 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:59:46 crc kubenswrapper[5119]: I0121 10:59:46.954755 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log" Jan 21 10:59:46 crc kubenswrapper[5119]: I0121 10:59:46.958254 5119 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log" Jan 21 10:59:46 crc kubenswrapper[5119]: I0121 10:59:46.960525 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 21 10:59:46 crc kubenswrapper[5119]: I0121 10:59:46.964237 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 21 10:59:49 crc kubenswrapper[5119]: I0121 10:59:49.919364 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:59:49 crc kubenswrapper[5119]: I0121 10:59:49.919704 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:59:52 crc kubenswrapper[5119]: I0121 10:59:52.557038 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kk7wl"] Jan 21 10:59:52 crc kubenswrapper[5119]: I0121 10:59:52.558550 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="06d8398a-ac2e-49e1-a854-f96510b839a1" containerName="oc" Jan 21 10:59:52 crc kubenswrapper[5119]: I0121 10:59:52.558566 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="06d8398a-ac2e-49e1-a854-f96510b839a1" containerName="oc" Jan 21 10:59:52 crc kubenswrapper[5119]: I0121 10:59:52.558790 5119 
memory_manager.go:356] "RemoveStaleState removing state" podUID="06d8398a-ac2e-49e1-a854-f96510b839a1" containerName="oc" Jan 21 10:59:52 crc kubenswrapper[5119]: I0121 10:59:52.576225 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kk7wl"] Jan 21 10:59:52 crc kubenswrapper[5119]: I0121 10:59:52.591938 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kk7wl" Jan 21 10:59:52 crc kubenswrapper[5119]: I0121 10:59:52.695590 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77tqr\" (UniqueName: \"kubernetes.io/projected/1ae351b7-7793-41a7-8f5a-58c590e972af-kube-api-access-77tqr\") pod \"community-operators-kk7wl\" (UID: \"1ae351b7-7793-41a7-8f5a-58c590e972af\") " pod="openshift-marketplace/community-operators-kk7wl" Jan 21 10:59:52 crc kubenswrapper[5119]: I0121 10:59:52.695757 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ae351b7-7793-41a7-8f5a-58c590e972af-catalog-content\") pod \"community-operators-kk7wl\" (UID: \"1ae351b7-7793-41a7-8f5a-58c590e972af\") " pod="openshift-marketplace/community-operators-kk7wl" Jan 21 10:59:52 crc kubenswrapper[5119]: I0121 10:59:52.695792 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ae351b7-7793-41a7-8f5a-58c590e972af-utilities\") pod \"community-operators-kk7wl\" (UID: \"1ae351b7-7793-41a7-8f5a-58c590e972af\") " pod="openshift-marketplace/community-operators-kk7wl" Jan 21 10:59:52 crc kubenswrapper[5119]: I0121 10:59:52.797634 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-77tqr\" (UniqueName: 
\"kubernetes.io/projected/1ae351b7-7793-41a7-8f5a-58c590e972af-kube-api-access-77tqr\") pod \"community-operators-kk7wl\" (UID: \"1ae351b7-7793-41a7-8f5a-58c590e972af\") " pod="openshift-marketplace/community-operators-kk7wl" Jan 21 10:59:52 crc kubenswrapper[5119]: I0121 10:59:52.797714 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ae351b7-7793-41a7-8f5a-58c590e972af-catalog-content\") pod \"community-operators-kk7wl\" (UID: \"1ae351b7-7793-41a7-8f5a-58c590e972af\") " pod="openshift-marketplace/community-operators-kk7wl" Jan 21 10:59:52 crc kubenswrapper[5119]: I0121 10:59:52.797751 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ae351b7-7793-41a7-8f5a-58c590e972af-utilities\") pod \"community-operators-kk7wl\" (UID: \"1ae351b7-7793-41a7-8f5a-58c590e972af\") " pod="openshift-marketplace/community-operators-kk7wl" Jan 21 10:59:52 crc kubenswrapper[5119]: I0121 10:59:52.798428 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ae351b7-7793-41a7-8f5a-58c590e972af-utilities\") pod \"community-operators-kk7wl\" (UID: \"1ae351b7-7793-41a7-8f5a-58c590e972af\") " pod="openshift-marketplace/community-operators-kk7wl" Jan 21 10:59:52 crc kubenswrapper[5119]: I0121 10:59:52.799114 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ae351b7-7793-41a7-8f5a-58c590e972af-catalog-content\") pod \"community-operators-kk7wl\" (UID: \"1ae351b7-7793-41a7-8f5a-58c590e972af\") " pod="openshift-marketplace/community-operators-kk7wl" Jan 21 10:59:52 crc kubenswrapper[5119]: I0121 10:59:52.828260 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-77tqr\" (UniqueName: 
\"kubernetes.io/projected/1ae351b7-7793-41a7-8f5a-58c590e972af-kube-api-access-77tqr\") pod \"community-operators-kk7wl\" (UID: \"1ae351b7-7793-41a7-8f5a-58c590e972af\") " pod="openshift-marketplace/community-operators-kk7wl" Jan 21 10:59:52 crc kubenswrapper[5119]: I0121 10:59:52.913762 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kk7wl" Jan 21 10:59:53 crc kubenswrapper[5119]: I0121 10:59:53.373574 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kk7wl"] Jan 21 10:59:53 crc kubenswrapper[5119]: I0121 10:59:53.602709 5119 generic.go:358] "Generic (PLEG): container finished" podID="1ae351b7-7793-41a7-8f5a-58c590e972af" containerID="e16f098dc6f1502de5a8e547df3c3a85af4e4759b89b495cc5d7da8f2ecb66ff" exitCode=0 Jan 21 10:59:53 crc kubenswrapper[5119]: I0121 10:59:53.602915 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kk7wl" event={"ID":"1ae351b7-7793-41a7-8f5a-58c590e972af","Type":"ContainerDied","Data":"e16f098dc6f1502de5a8e547df3c3a85af4e4759b89b495cc5d7da8f2ecb66ff"} Jan 21 10:59:53 crc kubenswrapper[5119]: I0121 10:59:53.602991 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kk7wl" event={"ID":"1ae351b7-7793-41a7-8f5a-58c590e972af","Type":"ContainerStarted","Data":"a6527a8c782dd01ea8c150382f3839992c836117ddf6b6fa122df3a66b451a20"} Jan 21 10:59:55 crc kubenswrapper[5119]: I0121 10:59:55.619056 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kk7wl" event={"ID":"1ae351b7-7793-41a7-8f5a-58c590e972af","Type":"ContainerStarted","Data":"520d94772ce18a19277898b88956eaf3c70b8b6d2296cd0dd93735883a91932f"} Jan 21 10:59:56 crc kubenswrapper[5119]: I0121 10:59:56.645870 5119 generic.go:358] "Generic (PLEG): container finished" podID="1ae351b7-7793-41a7-8f5a-58c590e972af" 
containerID="520d94772ce18a19277898b88956eaf3c70b8b6d2296cd0dd93735883a91932f" exitCode=0 Jan 21 10:59:56 crc kubenswrapper[5119]: I0121 10:59:56.645973 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kk7wl" event={"ID":"1ae351b7-7793-41a7-8f5a-58c590e972af","Type":"ContainerDied","Data":"520d94772ce18a19277898b88956eaf3c70b8b6d2296cd0dd93735883a91932f"} Jan 21 10:59:57 crc kubenswrapper[5119]: I0121 10:59:57.676994 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kk7wl" event={"ID":"1ae351b7-7793-41a7-8f5a-58c590e972af","Type":"ContainerStarted","Data":"610f3d3dcc6f0a5dfb3ff468ddb5f62d79be143b71e13abd258bd37dad491eae"} Jan 21 10:59:57 crc kubenswrapper[5119]: I0121 10:59:57.699345 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kk7wl" podStartSLOduration=3.945455722 podStartE2EDuration="5.699328197s" podCreationTimestamp="2026-01-21 10:59:52 +0000 UTC" firstStartedPulling="2026-01-21 10:59:53.603701413 +0000 UTC m=+3909.271793091" lastFinishedPulling="2026-01-21 10:59:55.357573888 +0000 UTC m=+3911.025665566" observedRunningTime="2026-01-21 10:59:57.695249776 +0000 UTC m=+3913.363341464" watchObservedRunningTime="2026-01-21 10:59:57.699328197 +0000 UTC m=+3913.367419875" Jan 21 11:00:00 crc kubenswrapper[5119]: I0121 11:00:00.136684 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483220-4x8w9"] Jan 21 11:00:00 crc kubenswrapper[5119]: I0121 11:00:00.161107 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483220-4x8w9"] Jan 21 11:00:00 crc kubenswrapper[5119]: I0121 11:00:00.161327 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483220-4x8w9" Jan 21 11:00:00 crc kubenswrapper[5119]: I0121 11:00:00.163434 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\"" Jan 21 11:00:00 crc kubenswrapper[5119]: I0121 11:00:00.164624 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 11:00:00 crc kubenswrapper[5119]: I0121 11:00:00.164763 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 11:00:00 crc kubenswrapper[5119]: I0121 11:00:00.243358 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483220-7p9wm"] Jan 21 11:00:00 crc kubenswrapper[5119]: I0121 11:00:00.253020 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-7p9wm" Jan 21 11:00:00 crc kubenswrapper[5119]: I0121 11:00:00.256172 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 21 11:00:00 crc kubenswrapper[5119]: I0121 11:00:00.256202 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 21 11:00:00 crc kubenswrapper[5119]: I0121 11:00:00.258890 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483220-7p9wm"] Jan 21 11:00:00 crc kubenswrapper[5119]: I0121 11:00:00.309428 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g6wv\" (UniqueName: \"kubernetes.io/projected/5a203e08-866e-47d6-b57f-37cbafc005f9-kube-api-access-7g6wv\") pod 
\"auto-csr-approver-29483220-4x8w9\" (UID: \"5a203e08-866e-47d6-b57f-37cbafc005f9\") " pod="openshift-infra/auto-csr-approver-29483220-4x8w9" Jan 21 11:00:00 crc kubenswrapper[5119]: I0121 11:00:00.410483 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvkvq\" (UniqueName: \"kubernetes.io/projected/d49de6d4-3efb-4ff0-9271-e57fbf99b28b-kube-api-access-wvkvq\") pod \"collect-profiles-29483220-7p9wm\" (UID: \"d49de6d4-3efb-4ff0-9271-e57fbf99b28b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-7p9wm" Jan 21 11:00:00 crc kubenswrapper[5119]: I0121 11:00:00.410554 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7g6wv\" (UniqueName: \"kubernetes.io/projected/5a203e08-866e-47d6-b57f-37cbafc005f9-kube-api-access-7g6wv\") pod \"auto-csr-approver-29483220-4x8w9\" (UID: \"5a203e08-866e-47d6-b57f-37cbafc005f9\") " pod="openshift-infra/auto-csr-approver-29483220-4x8w9" Jan 21 11:00:00 crc kubenswrapper[5119]: I0121 11:00:00.410718 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d49de6d4-3efb-4ff0-9271-e57fbf99b28b-secret-volume\") pod \"collect-profiles-29483220-7p9wm\" (UID: \"d49de6d4-3efb-4ff0-9271-e57fbf99b28b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-7p9wm" Jan 21 11:00:00 crc kubenswrapper[5119]: I0121 11:00:00.410761 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d49de6d4-3efb-4ff0-9271-e57fbf99b28b-config-volume\") pod \"collect-profiles-29483220-7p9wm\" (UID: \"d49de6d4-3efb-4ff0-9271-e57fbf99b28b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-7p9wm" Jan 21 11:00:00 crc kubenswrapper[5119]: I0121 11:00:00.435781 5119 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7g6wv\" (UniqueName: \"kubernetes.io/projected/5a203e08-866e-47d6-b57f-37cbafc005f9-kube-api-access-7g6wv\") pod \"auto-csr-approver-29483220-4x8w9\" (UID: \"5a203e08-866e-47d6-b57f-37cbafc005f9\") " pod="openshift-infra/auto-csr-approver-29483220-4x8w9" Jan 21 11:00:00 crc kubenswrapper[5119]: I0121 11:00:00.480825 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483220-4x8w9" Jan 21 11:00:00 crc kubenswrapper[5119]: I0121 11:00:00.512001 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d49de6d4-3efb-4ff0-9271-e57fbf99b28b-secret-volume\") pod \"collect-profiles-29483220-7p9wm\" (UID: \"d49de6d4-3efb-4ff0-9271-e57fbf99b28b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-7p9wm" Jan 21 11:00:00 crc kubenswrapper[5119]: I0121 11:00:00.512230 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d49de6d4-3efb-4ff0-9271-e57fbf99b28b-config-volume\") pod \"collect-profiles-29483220-7p9wm\" (UID: \"d49de6d4-3efb-4ff0-9271-e57fbf99b28b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-7p9wm" Jan 21 11:00:00 crc kubenswrapper[5119]: I0121 11:00:00.512417 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wvkvq\" (UniqueName: \"kubernetes.io/projected/d49de6d4-3efb-4ff0-9271-e57fbf99b28b-kube-api-access-wvkvq\") pod \"collect-profiles-29483220-7p9wm\" (UID: \"d49de6d4-3efb-4ff0-9271-e57fbf99b28b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-7p9wm" Jan 21 11:00:00 crc kubenswrapper[5119]: I0121 11:00:00.513641 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/d49de6d4-3efb-4ff0-9271-e57fbf99b28b-config-volume\") pod \"collect-profiles-29483220-7p9wm\" (UID: \"d49de6d4-3efb-4ff0-9271-e57fbf99b28b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-7p9wm" Jan 21 11:00:00 crc kubenswrapper[5119]: I0121 11:00:00.517512 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d49de6d4-3efb-4ff0-9271-e57fbf99b28b-secret-volume\") pod \"collect-profiles-29483220-7p9wm\" (UID: \"d49de6d4-3efb-4ff0-9271-e57fbf99b28b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-7p9wm" Jan 21 11:00:00 crc kubenswrapper[5119]: I0121 11:00:00.528422 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvkvq\" (UniqueName: \"kubernetes.io/projected/d49de6d4-3efb-4ff0-9271-e57fbf99b28b-kube-api-access-wvkvq\") pod \"collect-profiles-29483220-7p9wm\" (UID: \"d49de6d4-3efb-4ff0-9271-e57fbf99b28b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-7p9wm" Jan 21 11:00:00 crc kubenswrapper[5119]: I0121 11:00:00.570156 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-7p9wm" Jan 21 11:00:00 crc kubenswrapper[5119]: I0121 11:00:00.862785 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483220-4x8w9"] Jan 21 11:00:01 crc kubenswrapper[5119]: I0121 11:00:01.005450 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483220-7p9wm"] Jan 21 11:00:01 crc kubenswrapper[5119]: W0121 11:00:01.012282 5119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd49de6d4_3efb_4ff0_9271_e57fbf99b28b.slice/crio-2f904cac3a6f20f6a4592ef9db402de111a1da02be2250a6fc0b6ae576c79188 WatchSource:0}: Error finding container 2f904cac3a6f20f6a4592ef9db402de111a1da02be2250a6fc0b6ae576c79188: Status 404 returned error can't find the container with id 2f904cac3a6f20f6a4592ef9db402de111a1da02be2250a6fc0b6ae576c79188 Jan 21 11:00:01 crc kubenswrapper[5119]: I0121 11:00:01.711001 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483220-4x8w9" event={"ID":"5a203e08-866e-47d6-b57f-37cbafc005f9","Type":"ContainerStarted","Data":"3e12fd013fbffa3dedd4d0e2d6437ae40c2f37e512b763a952fd7199a2c03086"} Jan 21 11:00:01 crc kubenswrapper[5119]: I0121 11:00:01.712299 5119 generic.go:358] "Generic (PLEG): container finished" podID="d49de6d4-3efb-4ff0-9271-e57fbf99b28b" containerID="e946d8199d0c5085136b9d4abea2c3670e54f1d55e0edae14b79113a21143967" exitCode=0 Jan 21 11:00:01 crc kubenswrapper[5119]: I0121 11:00:01.712348 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-7p9wm" event={"ID":"d49de6d4-3efb-4ff0-9271-e57fbf99b28b","Type":"ContainerDied","Data":"e946d8199d0c5085136b9d4abea2c3670e54f1d55e0edae14b79113a21143967"} Jan 21 11:00:01 crc kubenswrapper[5119]: I0121 11:00:01.712362 5119 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-7p9wm" event={"ID":"d49de6d4-3efb-4ff0-9271-e57fbf99b28b","Type":"ContainerStarted","Data":"2f904cac3a6f20f6a4592ef9db402de111a1da02be2250a6fc0b6ae576c79188"} Jan 21 11:00:02 crc kubenswrapper[5119]: I0121 11:00:02.722214 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483220-4x8w9" event={"ID":"5a203e08-866e-47d6-b57f-37cbafc005f9","Type":"ContainerStarted","Data":"49cd63696e0d2fbb7609c9372de90327a56299f76c547185c0fa0b039add8204"} Jan 21 11:00:02 crc kubenswrapper[5119]: I0121 11:00:02.740482 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29483220-4x8w9" podStartSLOduration=1.234003081 podStartE2EDuration="2.740463671s" podCreationTimestamp="2026-01-21 11:00:00 +0000 UTC" firstStartedPulling="2026-01-21 11:00:00.869252041 +0000 UTC m=+3916.537343719" lastFinishedPulling="2026-01-21 11:00:02.375712621 +0000 UTC m=+3918.043804309" observedRunningTime="2026-01-21 11:00:02.735937468 +0000 UTC m=+3918.404029166" watchObservedRunningTime="2026-01-21 11:00:02.740463671 +0000 UTC m=+3918.408555349" Jan 21 11:00:02 crc kubenswrapper[5119]: I0121 11:00:02.914230 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kk7wl" Jan 21 11:00:02 crc kubenswrapper[5119]: I0121 11:00:02.914277 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-kk7wl" Jan 21 11:00:02 crc kubenswrapper[5119]: I0121 11:00:02.958225 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kk7wl" Jan 21 11:00:03 crc kubenswrapper[5119]: I0121 11:00:03.051724 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-7p9wm" Jan 21 11:00:03 crc kubenswrapper[5119]: I0121 11:00:03.156043 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d49de6d4-3efb-4ff0-9271-e57fbf99b28b-config-volume\") pod \"d49de6d4-3efb-4ff0-9271-e57fbf99b28b\" (UID: \"d49de6d4-3efb-4ff0-9271-e57fbf99b28b\") " Jan 21 11:00:03 crc kubenswrapper[5119]: I0121 11:00:03.157052 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d49de6d4-3efb-4ff0-9271-e57fbf99b28b-secret-volume\") pod \"d49de6d4-3efb-4ff0-9271-e57fbf99b28b\" (UID: \"d49de6d4-3efb-4ff0-9271-e57fbf99b28b\") " Jan 21 11:00:03 crc kubenswrapper[5119]: I0121 11:00:03.157128 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvkvq\" (UniqueName: \"kubernetes.io/projected/d49de6d4-3efb-4ff0-9271-e57fbf99b28b-kube-api-access-wvkvq\") pod \"d49de6d4-3efb-4ff0-9271-e57fbf99b28b\" (UID: \"d49de6d4-3efb-4ff0-9271-e57fbf99b28b\") " Jan 21 11:00:03 crc kubenswrapper[5119]: I0121 11:00:03.156951 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d49de6d4-3efb-4ff0-9271-e57fbf99b28b-config-volume" (OuterVolumeSpecName: "config-volume") pod "d49de6d4-3efb-4ff0-9271-e57fbf99b28b" (UID: "d49de6d4-3efb-4ff0-9271-e57fbf99b28b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 11:00:03 crc kubenswrapper[5119]: I0121 11:00:03.163137 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d49de6d4-3efb-4ff0-9271-e57fbf99b28b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d49de6d4-3efb-4ff0-9271-e57fbf99b28b" (UID: "d49de6d4-3efb-4ff0-9271-e57fbf99b28b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 11:00:03 crc kubenswrapper[5119]: I0121 11:00:03.163295 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d49de6d4-3efb-4ff0-9271-e57fbf99b28b-kube-api-access-wvkvq" (OuterVolumeSpecName: "kube-api-access-wvkvq") pod "d49de6d4-3efb-4ff0-9271-e57fbf99b28b" (UID: "d49de6d4-3efb-4ff0-9271-e57fbf99b28b"). InnerVolumeSpecName "kube-api-access-wvkvq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 11:00:03 crc kubenswrapper[5119]: I0121 11:00:03.259294 5119 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d49de6d4-3efb-4ff0-9271-e57fbf99b28b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:03 crc kubenswrapper[5119]: I0121 11:00:03.259839 5119 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d49de6d4-3efb-4ff0-9271-e57fbf99b28b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:03 crc kubenswrapper[5119]: I0121 11:00:03.259911 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wvkvq\" (UniqueName: \"kubernetes.io/projected/d49de6d4-3efb-4ff0-9271-e57fbf99b28b-kube-api-access-wvkvq\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:03 crc kubenswrapper[5119]: I0121 11:00:03.732810 5119 generic.go:358] "Generic (PLEG): container finished" podID="5a203e08-866e-47d6-b57f-37cbafc005f9" containerID="49cd63696e0d2fbb7609c9372de90327a56299f76c547185c0fa0b039add8204" exitCode=0 Jan 21 11:00:03 crc kubenswrapper[5119]: I0121 11:00:03.732865 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483220-4x8w9" event={"ID":"5a203e08-866e-47d6-b57f-37cbafc005f9","Type":"ContainerDied","Data":"49cd63696e0d2fbb7609c9372de90327a56299f76c547185c0fa0b039add8204"} Jan 21 11:00:03 crc kubenswrapper[5119]: I0121 11:00:03.735718 5119 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-7p9wm" Jan 21 11:00:03 crc kubenswrapper[5119]: I0121 11:00:03.736472 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-7p9wm" event={"ID":"d49de6d4-3efb-4ff0-9271-e57fbf99b28b","Type":"ContainerDied","Data":"2f904cac3a6f20f6a4592ef9db402de111a1da02be2250a6fc0b6ae576c79188"} Jan 21 11:00:03 crc kubenswrapper[5119]: I0121 11:00:03.736656 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f904cac3a6f20f6a4592ef9db402de111a1da02be2250a6fc0b6ae576c79188" Jan 21 11:00:03 crc kubenswrapper[5119]: I0121 11:00:03.775295 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kk7wl" Jan 21 11:00:04 crc kubenswrapper[5119]: I0121 11:00:04.117250 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483175-h66w7"] Jan 21 11:00:04 crc kubenswrapper[5119]: I0121 11:00:04.122743 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483175-h66w7"] Jan 21 11:00:04 crc kubenswrapper[5119]: I0121 11:00:04.600156 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7" path="/var/lib/kubelet/pods/0b3ef4ce-36c1-4d43-8a75-eb4aa59dc4a7/volumes" Jan 21 11:00:05 crc kubenswrapper[5119]: I0121 11:00:05.082676 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483220-4x8w9" Jan 21 11:00:05 crc kubenswrapper[5119]: I0121 11:00:05.191360 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g6wv\" (UniqueName: \"kubernetes.io/projected/5a203e08-866e-47d6-b57f-37cbafc005f9-kube-api-access-7g6wv\") pod \"5a203e08-866e-47d6-b57f-37cbafc005f9\" (UID: \"5a203e08-866e-47d6-b57f-37cbafc005f9\") " Jan 21 11:00:05 crc kubenswrapper[5119]: I0121 11:00:05.200335 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a203e08-866e-47d6-b57f-37cbafc005f9-kube-api-access-7g6wv" (OuterVolumeSpecName: "kube-api-access-7g6wv") pod "5a203e08-866e-47d6-b57f-37cbafc005f9" (UID: "5a203e08-866e-47d6-b57f-37cbafc005f9"). InnerVolumeSpecName "kube-api-access-7g6wv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 11:00:05 crc kubenswrapper[5119]: I0121 11:00:05.292447 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7g6wv\" (UniqueName: \"kubernetes.io/projected/5a203e08-866e-47d6-b57f-37cbafc005f9-kube-api-access-7g6wv\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:05 crc kubenswrapper[5119]: I0121 11:00:05.750737 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483220-4x8w9" event={"ID":"5a203e08-866e-47d6-b57f-37cbafc005f9","Type":"ContainerDied","Data":"3e12fd013fbffa3dedd4d0e2d6437ae40c2f37e512b763a952fd7199a2c03086"} Jan 21 11:00:05 crc kubenswrapper[5119]: I0121 11:00:05.750777 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e12fd013fbffa3dedd4d0e2d6437ae40c2f37e512b763a952fd7199a2c03086" Jan 21 11:00:05 crc kubenswrapper[5119]: I0121 11:00:05.750837 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483220-4x8w9" Jan 21 11:00:05 crc kubenswrapper[5119]: I0121 11:00:05.781473 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483214-p6qcs"] Jan 21 11:00:05 crc kubenswrapper[5119]: I0121 11:00:05.788016 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483214-p6qcs"] Jan 21 11:00:06 crc kubenswrapper[5119]: I0121 11:00:06.598873 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e275e5e9-b485-42da-be23-5bc0fa02d065" path="/var/lib/kubelet/pods/e275e5e9-b485-42da-be23-5bc0fa02d065/volumes" Jan 21 11:00:06 crc kubenswrapper[5119]: I0121 11:00:06.927135 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kk7wl"] Jan 21 11:00:06 crc kubenswrapper[5119]: I0121 11:00:06.927454 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kk7wl" podUID="1ae351b7-7793-41a7-8f5a-58c590e972af" containerName="registry-server" containerID="cri-o://610f3d3dcc6f0a5dfb3ff468ddb5f62d79be143b71e13abd258bd37dad491eae" gracePeriod=2 Jan 21 11:00:07 crc kubenswrapper[5119]: I0121 11:00:07.767557 5119 generic.go:358] "Generic (PLEG): container finished" podID="1ae351b7-7793-41a7-8f5a-58c590e972af" containerID="610f3d3dcc6f0a5dfb3ff468ddb5f62d79be143b71e13abd258bd37dad491eae" exitCode=0 Jan 21 11:00:07 crc kubenswrapper[5119]: I0121 11:00:07.768479 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kk7wl" event={"ID":"1ae351b7-7793-41a7-8f5a-58c590e972af","Type":"ContainerDied","Data":"610f3d3dcc6f0a5dfb3ff468ddb5f62d79be143b71e13abd258bd37dad491eae"} Jan 21 11:00:07 crc kubenswrapper[5119]: I0121 11:00:07.911032 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kk7wl" Jan 21 11:00:07 crc kubenswrapper[5119]: I0121 11:00:07.932593 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ae351b7-7793-41a7-8f5a-58c590e972af-utilities\") pod \"1ae351b7-7793-41a7-8f5a-58c590e972af\" (UID: \"1ae351b7-7793-41a7-8f5a-58c590e972af\") " Jan 21 11:00:07 crc kubenswrapper[5119]: I0121 11:00:07.932926 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ae351b7-7793-41a7-8f5a-58c590e972af-catalog-content\") pod \"1ae351b7-7793-41a7-8f5a-58c590e972af\" (UID: \"1ae351b7-7793-41a7-8f5a-58c590e972af\") " Jan 21 11:00:07 crc kubenswrapper[5119]: I0121 11:00:07.933086 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77tqr\" (UniqueName: \"kubernetes.io/projected/1ae351b7-7793-41a7-8f5a-58c590e972af-kube-api-access-77tqr\") pod \"1ae351b7-7793-41a7-8f5a-58c590e972af\" (UID: \"1ae351b7-7793-41a7-8f5a-58c590e972af\") " Jan 21 11:00:07 crc kubenswrapper[5119]: I0121 11:00:07.933822 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ae351b7-7793-41a7-8f5a-58c590e972af-utilities" (OuterVolumeSpecName: "utilities") pod "1ae351b7-7793-41a7-8f5a-58c590e972af" (UID: "1ae351b7-7793-41a7-8f5a-58c590e972af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 11:00:07 crc kubenswrapper[5119]: I0121 11:00:07.942060 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ae351b7-7793-41a7-8f5a-58c590e972af-kube-api-access-77tqr" (OuterVolumeSpecName: "kube-api-access-77tqr") pod "1ae351b7-7793-41a7-8f5a-58c590e972af" (UID: "1ae351b7-7793-41a7-8f5a-58c590e972af"). InnerVolumeSpecName "kube-api-access-77tqr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 11:00:07 crc kubenswrapper[5119]: I0121 11:00:07.982994 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ae351b7-7793-41a7-8f5a-58c590e972af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ae351b7-7793-41a7-8f5a-58c590e972af" (UID: "1ae351b7-7793-41a7-8f5a-58c590e972af"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 11:00:08 crc kubenswrapper[5119]: I0121 11:00:08.034702 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-77tqr\" (UniqueName: \"kubernetes.io/projected/1ae351b7-7793-41a7-8f5a-58c590e972af-kube-api-access-77tqr\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:08 crc kubenswrapper[5119]: I0121 11:00:08.034753 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ae351b7-7793-41a7-8f5a-58c590e972af-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:08 crc kubenswrapper[5119]: I0121 11:00:08.034766 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ae351b7-7793-41a7-8f5a-58c590e972af-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:08 crc kubenswrapper[5119]: I0121 11:00:08.778382 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kk7wl" event={"ID":"1ae351b7-7793-41a7-8f5a-58c590e972af","Type":"ContainerDied","Data":"a6527a8c782dd01ea8c150382f3839992c836117ddf6b6fa122df3a66b451a20"} Jan 21 11:00:08 crc kubenswrapper[5119]: I0121 11:00:08.778466 5119 scope.go:117] "RemoveContainer" containerID="610f3d3dcc6f0a5dfb3ff468ddb5f62d79be143b71e13abd258bd37dad491eae" Jan 21 11:00:08 crc kubenswrapper[5119]: I0121 11:00:08.778410 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kk7wl" Jan 21 11:00:08 crc kubenswrapper[5119]: I0121 11:00:08.796453 5119 scope.go:117] "RemoveContainer" containerID="520d94772ce18a19277898b88956eaf3c70b8b6d2296cd0dd93735883a91932f" Jan 21 11:00:08 crc kubenswrapper[5119]: I0121 11:00:08.803059 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kk7wl"] Jan 21 11:00:08 crc kubenswrapper[5119]: I0121 11:00:08.807915 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kk7wl"] Jan 21 11:00:08 crc kubenswrapper[5119]: I0121 11:00:08.814808 5119 scope.go:117] "RemoveContainer" containerID="e16f098dc6f1502de5a8e547df3c3a85af4e4759b89b495cc5d7da8f2ecb66ff" Jan 21 11:00:10 crc kubenswrapper[5119]: I0121 11:00:10.605849 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ae351b7-7793-41a7-8f5a-58c590e972af" path="/var/lib/kubelet/pods/1ae351b7-7793-41a7-8f5a-58c590e972af/volumes" Jan 21 11:00:19 crc kubenswrapper[5119]: I0121 11:00:19.919903 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:00:19 crc kubenswrapper[5119]: I0121 11:00:19.921131 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:00:19 crc kubenswrapper[5119]: I0121 11:00:19.921252 5119 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" Jan 21 
11:00:19 crc kubenswrapper[5119]: I0121 11:00:19.922767 5119 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a"} pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:00:19 crc kubenswrapper[5119]: I0121 11:00:19.922917 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" containerID="cri-o://dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a" gracePeriod=600 Jan 21 11:00:20 crc kubenswrapper[5119]: E0121 11:00:20.047321 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 11:00:20 crc kubenswrapper[5119]: I0121 11:00:20.874361 5119 generic.go:358] "Generic (PLEG): container finished" podID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerID="dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a" exitCode=0 Jan 21 11:00:20 crc kubenswrapper[5119]: I0121 11:00:20.874633 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerDied","Data":"dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a"} Jan 21 11:00:20 crc kubenswrapper[5119]: I0121 11:00:20.874665 5119 scope.go:117] 
"RemoveContainer" containerID="6c6c6443973f84ae68f595b28489b4574cb199d66372a1d02488d64455df6eb3" Jan 21 11:00:20 crc kubenswrapper[5119]: I0121 11:00:20.875129 5119 scope.go:117] "RemoveContainer" containerID="dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a" Jan 21 11:00:20 crc kubenswrapper[5119]: E0121 11:00:20.875365 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 11:00:33 crc kubenswrapper[5119]: I0121 11:00:33.591341 5119 scope.go:117] "RemoveContainer" containerID="dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a" Jan 21 11:00:33 crc kubenswrapper[5119]: E0121 11:00:33.592341 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 11:00:44 crc kubenswrapper[5119]: I0121 11:00:44.259301 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hcxbs"] Jan 21 11:00:44 crc kubenswrapper[5119]: I0121 11:00:44.261068 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1ae351b7-7793-41a7-8f5a-58c590e972af" containerName="registry-server" Jan 21 11:00:44 crc kubenswrapper[5119]: I0121 11:00:44.261083 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ae351b7-7793-41a7-8f5a-58c590e972af" 
containerName="registry-server" Jan 21 11:00:44 crc kubenswrapper[5119]: I0121 11:00:44.261128 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5a203e08-866e-47d6-b57f-37cbafc005f9" containerName="oc" Jan 21 11:00:44 crc kubenswrapper[5119]: I0121 11:00:44.261134 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a203e08-866e-47d6-b57f-37cbafc005f9" containerName="oc" Jan 21 11:00:44 crc kubenswrapper[5119]: I0121 11:00:44.261147 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1ae351b7-7793-41a7-8f5a-58c590e972af" containerName="extract-utilities" Jan 21 11:00:44 crc kubenswrapper[5119]: I0121 11:00:44.261154 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ae351b7-7793-41a7-8f5a-58c590e972af" containerName="extract-utilities" Jan 21 11:00:44 crc kubenswrapper[5119]: I0121 11:00:44.261162 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d49de6d4-3efb-4ff0-9271-e57fbf99b28b" containerName="collect-profiles" Jan 21 11:00:44 crc kubenswrapper[5119]: I0121 11:00:44.261167 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49de6d4-3efb-4ff0-9271-e57fbf99b28b" containerName="collect-profiles" Jan 21 11:00:44 crc kubenswrapper[5119]: I0121 11:00:44.261178 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1ae351b7-7793-41a7-8f5a-58c590e972af" containerName="extract-content" Jan 21 11:00:44 crc kubenswrapper[5119]: I0121 11:00:44.261184 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ae351b7-7793-41a7-8f5a-58c590e972af" containerName="extract-content" Jan 21 11:00:44 crc kubenswrapper[5119]: I0121 11:00:44.261300 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="1ae351b7-7793-41a7-8f5a-58c590e972af" containerName="registry-server" Jan 21 11:00:44 crc kubenswrapper[5119]: I0121 11:00:44.261311 5119 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="d49de6d4-3efb-4ff0-9271-e57fbf99b28b" containerName="collect-profiles" Jan 21 11:00:44 crc kubenswrapper[5119]: I0121 11:00:44.261322 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="5a203e08-866e-47d6-b57f-37cbafc005f9" containerName="oc" Jan 21 11:00:44 crc kubenswrapper[5119]: I0121 11:00:44.267829 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hcxbs" Jan 21 11:00:44 crc kubenswrapper[5119]: I0121 11:00:44.277443 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hcxbs"] Jan 21 11:00:44 crc kubenswrapper[5119]: I0121 11:00:44.354716 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mttwx\" (UniqueName: \"kubernetes.io/projected/1b40e945-8cca-482b-8314-d46529e21206-kube-api-access-mttwx\") pod \"redhat-operators-hcxbs\" (UID: \"1b40e945-8cca-482b-8314-d46529e21206\") " pod="openshift-marketplace/redhat-operators-hcxbs" Jan 21 11:00:44 crc kubenswrapper[5119]: I0121 11:00:44.354805 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b40e945-8cca-482b-8314-d46529e21206-catalog-content\") pod \"redhat-operators-hcxbs\" (UID: \"1b40e945-8cca-482b-8314-d46529e21206\") " pod="openshift-marketplace/redhat-operators-hcxbs" Jan 21 11:00:44 crc kubenswrapper[5119]: I0121 11:00:44.354834 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b40e945-8cca-482b-8314-d46529e21206-utilities\") pod \"redhat-operators-hcxbs\" (UID: \"1b40e945-8cca-482b-8314-d46529e21206\") " pod="openshift-marketplace/redhat-operators-hcxbs" Jan 21 11:00:44 crc kubenswrapper[5119]: I0121 11:00:44.456842 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"kube-api-access-mttwx\" (UniqueName: \"kubernetes.io/projected/1b40e945-8cca-482b-8314-d46529e21206-kube-api-access-mttwx\") pod \"redhat-operators-hcxbs\" (UID: \"1b40e945-8cca-482b-8314-d46529e21206\") " pod="openshift-marketplace/redhat-operators-hcxbs" Jan 21 11:00:44 crc kubenswrapper[5119]: I0121 11:00:44.456920 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b40e945-8cca-482b-8314-d46529e21206-catalog-content\") pod \"redhat-operators-hcxbs\" (UID: \"1b40e945-8cca-482b-8314-d46529e21206\") " pod="openshift-marketplace/redhat-operators-hcxbs" Jan 21 11:00:44 crc kubenswrapper[5119]: I0121 11:00:44.456940 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b40e945-8cca-482b-8314-d46529e21206-utilities\") pod \"redhat-operators-hcxbs\" (UID: \"1b40e945-8cca-482b-8314-d46529e21206\") " pod="openshift-marketplace/redhat-operators-hcxbs" Jan 21 11:00:44 crc kubenswrapper[5119]: I0121 11:00:44.457483 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b40e945-8cca-482b-8314-d46529e21206-utilities\") pod \"redhat-operators-hcxbs\" (UID: \"1b40e945-8cca-482b-8314-d46529e21206\") " pod="openshift-marketplace/redhat-operators-hcxbs" Jan 21 11:00:44 crc kubenswrapper[5119]: I0121 11:00:44.457672 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b40e945-8cca-482b-8314-d46529e21206-catalog-content\") pod \"redhat-operators-hcxbs\" (UID: \"1b40e945-8cca-482b-8314-d46529e21206\") " pod="openshift-marketplace/redhat-operators-hcxbs" Jan 21 11:00:44 crc kubenswrapper[5119]: I0121 11:00:44.475776 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mttwx\" (UniqueName: 
\"kubernetes.io/projected/1b40e945-8cca-482b-8314-d46529e21206-kube-api-access-mttwx\") pod \"redhat-operators-hcxbs\" (UID: \"1b40e945-8cca-482b-8314-d46529e21206\") " pod="openshift-marketplace/redhat-operators-hcxbs" Jan 21 11:00:44 crc kubenswrapper[5119]: I0121 11:00:44.596165 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hcxbs" Jan 21 11:00:45 crc kubenswrapper[5119]: I0121 11:00:45.057374 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hcxbs"] Jan 21 11:00:45 crc kubenswrapper[5119]: I0121 11:00:45.714568 5119 generic.go:358] "Generic (PLEG): container finished" podID="1b40e945-8cca-482b-8314-d46529e21206" containerID="f01b41bcb903d2200eb3e94850a6f29f6741d37c7aafd15b166a6233b3269cb1" exitCode=0 Jan 21 11:00:45 crc kubenswrapper[5119]: I0121 11:00:45.714638 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcxbs" event={"ID":"1b40e945-8cca-482b-8314-d46529e21206","Type":"ContainerDied","Data":"f01b41bcb903d2200eb3e94850a6f29f6741d37c7aafd15b166a6233b3269cb1"} Jan 21 11:00:45 crc kubenswrapper[5119]: I0121 11:00:45.714930 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcxbs" event={"ID":"1b40e945-8cca-482b-8314-d46529e21206","Type":"ContainerStarted","Data":"ba4f82ba4b38b31b66a55f5c31c517eed2a8103c273748a9739e7d07fc031cfe"} Jan 21 11:00:47 crc kubenswrapper[5119]: I0121 11:00:47.591186 5119 scope.go:117] "RemoveContainer" containerID="dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a" Jan 21 11:00:47 crc kubenswrapper[5119]: E0121 11:00:47.592128 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 11:00:47 crc kubenswrapper[5119]: I0121 11:00:47.733850 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcxbs" event={"ID":"1b40e945-8cca-482b-8314-d46529e21206","Type":"ContainerStarted","Data":"5c62eed9a57c85df7143344643508c9263018ff8476a5c3de609cf4db24bc59b"} Jan 21 11:00:48 crc kubenswrapper[5119]: I0121 11:00:48.741801 5119 generic.go:358] "Generic (PLEG): container finished" podID="1b40e945-8cca-482b-8314-d46529e21206" containerID="5c62eed9a57c85df7143344643508c9263018ff8476a5c3de609cf4db24bc59b" exitCode=0 Jan 21 11:00:48 crc kubenswrapper[5119]: I0121 11:00:48.742042 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcxbs" event={"ID":"1b40e945-8cca-482b-8314-d46529e21206","Type":"ContainerDied","Data":"5c62eed9a57c85df7143344643508c9263018ff8476a5c3de609cf4db24bc59b"} Jan 21 11:00:50 crc kubenswrapper[5119]: I0121 11:00:50.757478 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcxbs" event={"ID":"1b40e945-8cca-482b-8314-d46529e21206","Type":"ContainerStarted","Data":"c89b725f65d69ab94bacd950a70e6d19ea40bad335c8ebf5411839ce8d86dc07"} Jan 21 11:00:50 crc kubenswrapper[5119]: I0121 11:00:50.778461 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hcxbs" podStartSLOduration=6.082124277 podStartE2EDuration="6.777787359s" podCreationTimestamp="2026-01-21 11:00:44 +0000 UTC" firstStartedPulling="2026-01-21 11:00:45.71579283 +0000 UTC m=+3961.383884508" lastFinishedPulling="2026-01-21 11:00:46.411455912 +0000 UTC m=+3962.079547590" observedRunningTime="2026-01-21 11:00:50.774372016 +0000 UTC m=+3966.442463694" 
watchObservedRunningTime="2026-01-21 11:00:50.777787359 +0000 UTC m=+3966.445879057" Jan 21 11:00:54 crc kubenswrapper[5119]: I0121 11:00:54.235189 5119 scope.go:117] "RemoveContainer" containerID="ee7750be22e32d41af7c431d9997ebd1f86bd2391c9d64cbd31557e69aef8ed0" Jan 21 11:00:54 crc kubenswrapper[5119]: I0121 11:00:54.309435 5119 scope.go:117] "RemoveContainer" containerID="2b322f1d7d78f08e77443a5f3021fc045b560ada86c757684b1962d8710ad259" Jan 21 11:00:54 crc kubenswrapper[5119]: I0121 11:00:54.599999 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hcxbs" Jan 21 11:00:54 crc kubenswrapper[5119]: I0121 11:00:54.600344 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-hcxbs" Jan 21 11:00:55 crc kubenswrapper[5119]: I0121 11:00:55.644197 5119 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hcxbs" podUID="1b40e945-8cca-482b-8314-d46529e21206" containerName="registry-server" probeResult="failure" output=< Jan 21 11:00:55 crc kubenswrapper[5119]: timeout: failed to connect service ":50051" within 1s Jan 21 11:00:55 crc kubenswrapper[5119]: > Jan 21 11:01:01 crc kubenswrapper[5119]: I0121 11:01:01.591050 5119 scope.go:117] "RemoveContainer" containerID="dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a" Jan 21 11:01:01 crc kubenswrapper[5119]: E0121 11:01:01.591809 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 11:01:04 crc kubenswrapper[5119]: I0121 11:01:04.642310 5119 kubelet.go:2658] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hcxbs" Jan 21 11:01:04 crc kubenswrapper[5119]: I0121 11:01:04.682271 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hcxbs" Jan 21 11:01:04 crc kubenswrapper[5119]: I0121 11:01:04.874040 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hcxbs"] Jan 21 11:01:05 crc kubenswrapper[5119]: I0121 11:01:05.920842 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hcxbs" podUID="1b40e945-8cca-482b-8314-d46529e21206" containerName="registry-server" containerID="cri-o://c89b725f65d69ab94bacd950a70e6d19ea40bad335c8ebf5411839ce8d86dc07" gracePeriod=2 Jan 21 11:01:06 crc kubenswrapper[5119]: I0121 11:01:06.298060 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hcxbs" Jan 21 11:01:06 crc kubenswrapper[5119]: I0121 11:01:06.376318 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b40e945-8cca-482b-8314-d46529e21206-catalog-content\") pod \"1b40e945-8cca-482b-8314-d46529e21206\" (UID: \"1b40e945-8cca-482b-8314-d46529e21206\") " Jan 21 11:01:06 crc kubenswrapper[5119]: I0121 11:01:06.376390 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b40e945-8cca-482b-8314-d46529e21206-utilities\") pod \"1b40e945-8cca-482b-8314-d46529e21206\" (UID: \"1b40e945-8cca-482b-8314-d46529e21206\") " Jan 21 11:01:06 crc kubenswrapper[5119]: I0121 11:01:06.376478 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mttwx\" (UniqueName: \"kubernetes.io/projected/1b40e945-8cca-482b-8314-d46529e21206-kube-api-access-mttwx\") 
pod \"1b40e945-8cca-482b-8314-d46529e21206\" (UID: \"1b40e945-8cca-482b-8314-d46529e21206\") " Jan 21 11:01:06 crc kubenswrapper[5119]: I0121 11:01:06.378443 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b40e945-8cca-482b-8314-d46529e21206-utilities" (OuterVolumeSpecName: "utilities") pod "1b40e945-8cca-482b-8314-d46529e21206" (UID: "1b40e945-8cca-482b-8314-d46529e21206"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 11:01:06 crc kubenswrapper[5119]: I0121 11:01:06.383012 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b40e945-8cca-482b-8314-d46529e21206-kube-api-access-mttwx" (OuterVolumeSpecName: "kube-api-access-mttwx") pod "1b40e945-8cca-482b-8314-d46529e21206" (UID: "1b40e945-8cca-482b-8314-d46529e21206"). InnerVolumeSpecName "kube-api-access-mttwx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 11:01:06 crc kubenswrapper[5119]: I0121 11:01:06.479003 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mttwx\" (UniqueName: \"kubernetes.io/projected/1b40e945-8cca-482b-8314-d46529e21206-kube-api-access-mttwx\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:06 crc kubenswrapper[5119]: I0121 11:01:06.479037 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b40e945-8cca-482b-8314-d46529e21206-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:06 crc kubenswrapper[5119]: I0121 11:01:06.481371 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b40e945-8cca-482b-8314-d46529e21206-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b40e945-8cca-482b-8314-d46529e21206" (UID: "1b40e945-8cca-482b-8314-d46529e21206"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 11:01:06 crc kubenswrapper[5119]: I0121 11:01:06.579893 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b40e945-8cca-482b-8314-d46529e21206-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:06 crc kubenswrapper[5119]: E0121 11:01:06.630782 5119 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b40e945_8cca_482b_8314_d46529e21206.slice/crio-ba4f82ba4b38b31b66a55f5c31c517eed2a8103c273748a9739e7d07fc031cfe\": RecentStats: unable to find data in memory cache]" Jan 21 11:01:06 crc kubenswrapper[5119]: I0121 11:01:06.930975 5119 generic.go:358] "Generic (PLEG): container finished" podID="1b40e945-8cca-482b-8314-d46529e21206" containerID="c89b725f65d69ab94bacd950a70e6d19ea40bad335c8ebf5411839ce8d86dc07" exitCode=0 Jan 21 11:01:06 crc kubenswrapper[5119]: I0121 11:01:06.931037 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcxbs" event={"ID":"1b40e945-8cca-482b-8314-d46529e21206","Type":"ContainerDied","Data":"c89b725f65d69ab94bacd950a70e6d19ea40bad335c8ebf5411839ce8d86dc07"} Jan 21 11:01:06 crc kubenswrapper[5119]: I0121 11:01:06.931060 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcxbs" event={"ID":"1b40e945-8cca-482b-8314-d46529e21206","Type":"ContainerDied","Data":"ba4f82ba4b38b31b66a55f5c31c517eed2a8103c273748a9739e7d07fc031cfe"} Jan 21 11:01:06 crc kubenswrapper[5119]: I0121 11:01:06.931076 5119 scope.go:117] "RemoveContainer" containerID="c89b725f65d69ab94bacd950a70e6d19ea40bad335c8ebf5411839ce8d86dc07" Jan 21 11:01:06 crc kubenswrapper[5119]: I0121 11:01:06.931215 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hcxbs" Jan 21 11:01:06 crc kubenswrapper[5119]: I0121 11:01:06.956716 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hcxbs"] Jan 21 11:01:06 crc kubenswrapper[5119]: I0121 11:01:06.962928 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hcxbs"] Jan 21 11:01:06 crc kubenswrapper[5119]: I0121 11:01:06.963635 5119 scope.go:117] "RemoveContainer" containerID="5c62eed9a57c85df7143344643508c9263018ff8476a5c3de609cf4db24bc59b" Jan 21 11:01:06 crc kubenswrapper[5119]: I0121 11:01:06.996916 5119 scope.go:117] "RemoveContainer" containerID="f01b41bcb903d2200eb3e94850a6f29f6741d37c7aafd15b166a6233b3269cb1" Jan 21 11:01:07 crc kubenswrapper[5119]: I0121 11:01:07.012384 5119 scope.go:117] "RemoveContainer" containerID="c89b725f65d69ab94bacd950a70e6d19ea40bad335c8ebf5411839ce8d86dc07" Jan 21 11:01:07 crc kubenswrapper[5119]: E0121 11:01:07.012956 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c89b725f65d69ab94bacd950a70e6d19ea40bad335c8ebf5411839ce8d86dc07\": container with ID starting with c89b725f65d69ab94bacd950a70e6d19ea40bad335c8ebf5411839ce8d86dc07 not found: ID does not exist" containerID="c89b725f65d69ab94bacd950a70e6d19ea40bad335c8ebf5411839ce8d86dc07" Jan 21 11:01:07 crc kubenswrapper[5119]: I0121 11:01:07.013006 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c89b725f65d69ab94bacd950a70e6d19ea40bad335c8ebf5411839ce8d86dc07"} err="failed to get container status \"c89b725f65d69ab94bacd950a70e6d19ea40bad335c8ebf5411839ce8d86dc07\": rpc error: code = NotFound desc = could not find container \"c89b725f65d69ab94bacd950a70e6d19ea40bad335c8ebf5411839ce8d86dc07\": container with ID starting with c89b725f65d69ab94bacd950a70e6d19ea40bad335c8ebf5411839ce8d86dc07 not found: ID does 
not exist" Jan 21 11:01:07 crc kubenswrapper[5119]: I0121 11:01:07.013026 5119 scope.go:117] "RemoveContainer" containerID="5c62eed9a57c85df7143344643508c9263018ff8476a5c3de609cf4db24bc59b" Jan 21 11:01:07 crc kubenswrapper[5119]: E0121 11:01:07.013426 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c62eed9a57c85df7143344643508c9263018ff8476a5c3de609cf4db24bc59b\": container with ID starting with 5c62eed9a57c85df7143344643508c9263018ff8476a5c3de609cf4db24bc59b not found: ID does not exist" containerID="5c62eed9a57c85df7143344643508c9263018ff8476a5c3de609cf4db24bc59b" Jan 21 11:01:07 crc kubenswrapper[5119]: I0121 11:01:07.013469 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c62eed9a57c85df7143344643508c9263018ff8476a5c3de609cf4db24bc59b"} err="failed to get container status \"5c62eed9a57c85df7143344643508c9263018ff8476a5c3de609cf4db24bc59b\": rpc error: code = NotFound desc = could not find container \"5c62eed9a57c85df7143344643508c9263018ff8476a5c3de609cf4db24bc59b\": container with ID starting with 5c62eed9a57c85df7143344643508c9263018ff8476a5c3de609cf4db24bc59b not found: ID does not exist" Jan 21 11:01:07 crc kubenswrapper[5119]: I0121 11:01:07.013495 5119 scope.go:117] "RemoveContainer" containerID="f01b41bcb903d2200eb3e94850a6f29f6741d37c7aafd15b166a6233b3269cb1" Jan 21 11:01:07 crc kubenswrapper[5119]: E0121 11:01:07.013966 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f01b41bcb903d2200eb3e94850a6f29f6741d37c7aafd15b166a6233b3269cb1\": container with ID starting with f01b41bcb903d2200eb3e94850a6f29f6741d37c7aafd15b166a6233b3269cb1 not found: ID does not exist" containerID="f01b41bcb903d2200eb3e94850a6f29f6741d37c7aafd15b166a6233b3269cb1" Jan 21 11:01:07 crc kubenswrapper[5119]: I0121 11:01:07.013992 5119 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f01b41bcb903d2200eb3e94850a6f29f6741d37c7aafd15b166a6233b3269cb1"} err="failed to get container status \"f01b41bcb903d2200eb3e94850a6f29f6741d37c7aafd15b166a6233b3269cb1\": rpc error: code = NotFound desc = could not find container \"f01b41bcb903d2200eb3e94850a6f29f6741d37c7aafd15b166a6233b3269cb1\": container with ID starting with f01b41bcb903d2200eb3e94850a6f29f6741d37c7aafd15b166a6233b3269cb1 not found: ID does not exist" Jan 21 11:01:08 crc kubenswrapper[5119]: I0121 11:01:08.601665 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b40e945-8cca-482b-8314-d46529e21206" path="/var/lib/kubelet/pods/1b40e945-8cca-482b-8314-d46529e21206/volumes" Jan 21 11:01:16 crc kubenswrapper[5119]: I0121 11:01:16.590527 5119 scope.go:117] "RemoveContainer" containerID="dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a" Jan 21 11:01:16 crc kubenswrapper[5119]: E0121 11:01:16.591394 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 11:01:27 crc kubenswrapper[5119]: I0121 11:01:27.591471 5119 scope.go:117] "RemoveContainer" containerID="dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a" Jan 21 11:01:27 crc kubenswrapper[5119]: E0121 11:01:27.592289 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 11:01:38 crc kubenswrapper[5119]: I0121 11:01:38.591644 5119 scope.go:117] "RemoveContainer" containerID="dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a" Jan 21 11:01:38 crc kubenswrapper[5119]: E0121 11:01:38.593484 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 11:01:53 crc kubenswrapper[5119]: I0121 11:01:53.591001 5119 scope.go:117] "RemoveContainer" containerID="dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a" Jan 21 11:01:53 crc kubenswrapper[5119]: E0121 11:01:53.591902 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 11:02:00 crc kubenswrapper[5119]: I0121 11:02:00.131374 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483222-hnfhz"] Jan 21 11:02:00 crc kubenswrapper[5119]: I0121 11:02:00.132333 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1b40e945-8cca-482b-8314-d46529e21206" containerName="registry-server" Jan 21 11:02:00 crc kubenswrapper[5119]: I0121 11:02:00.132345 5119 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1b40e945-8cca-482b-8314-d46529e21206" containerName="registry-server" Jan 21 11:02:00 crc kubenswrapper[5119]: I0121 11:02:00.132356 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1b40e945-8cca-482b-8314-d46529e21206" containerName="extract-content" Jan 21 11:02:00 crc kubenswrapper[5119]: I0121 11:02:00.132362 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b40e945-8cca-482b-8314-d46529e21206" containerName="extract-content" Jan 21 11:02:00 crc kubenswrapper[5119]: I0121 11:02:00.132376 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1b40e945-8cca-482b-8314-d46529e21206" containerName="extract-utilities" Jan 21 11:02:00 crc kubenswrapper[5119]: I0121 11:02:00.132383 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b40e945-8cca-482b-8314-d46529e21206" containerName="extract-utilities" Jan 21 11:02:00 crc kubenswrapper[5119]: I0121 11:02:00.132489 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="1b40e945-8cca-482b-8314-d46529e21206" containerName="registry-server" Jan 21 11:02:00 crc kubenswrapper[5119]: I0121 11:02:00.152834 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483222-hnfhz"] Jan 21 11:02:00 crc kubenswrapper[5119]: I0121 11:02:00.152978 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483222-hnfhz" Jan 21 11:02:00 crc kubenswrapper[5119]: I0121 11:02:00.156733 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 11:02:00 crc kubenswrapper[5119]: I0121 11:02:00.157188 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\"" Jan 21 11:02:00 crc kubenswrapper[5119]: I0121 11:02:00.157395 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 11:02:00 crc kubenswrapper[5119]: I0121 11:02:00.306585 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9dzt\" (UniqueName: \"kubernetes.io/projected/e16ebb15-774e-406c-8c4d-640547741054-kube-api-access-v9dzt\") pod \"auto-csr-approver-29483222-hnfhz\" (UID: \"e16ebb15-774e-406c-8c4d-640547741054\") " pod="openshift-infra/auto-csr-approver-29483222-hnfhz" Jan 21 11:02:00 crc kubenswrapper[5119]: I0121 11:02:00.407994 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v9dzt\" (UniqueName: \"kubernetes.io/projected/e16ebb15-774e-406c-8c4d-640547741054-kube-api-access-v9dzt\") pod \"auto-csr-approver-29483222-hnfhz\" (UID: \"e16ebb15-774e-406c-8c4d-640547741054\") " pod="openshift-infra/auto-csr-approver-29483222-hnfhz" Jan 21 11:02:00 crc kubenswrapper[5119]: I0121 11:02:00.433325 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9dzt\" (UniqueName: \"kubernetes.io/projected/e16ebb15-774e-406c-8c4d-640547741054-kube-api-access-v9dzt\") pod \"auto-csr-approver-29483222-hnfhz\" (UID: \"e16ebb15-774e-406c-8c4d-640547741054\") " pod="openshift-infra/auto-csr-approver-29483222-hnfhz" Jan 21 11:02:00 crc kubenswrapper[5119]: I0121 11:02:00.471206 5119 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483222-hnfhz" Jan 21 11:02:00 crc kubenswrapper[5119]: I0121 11:02:00.857728 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483222-hnfhz"] Jan 21 11:02:00 crc kubenswrapper[5119]: I0121 11:02:00.888753 5119 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 11:02:01 crc kubenswrapper[5119]: I0121 11:02:01.344464 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483222-hnfhz" event={"ID":"e16ebb15-774e-406c-8c4d-640547741054","Type":"ContainerStarted","Data":"8021a52da5c27254da6a955d3ac3649d3f7e3ecca86ccf00032f16ebf834ffb3"} Jan 21 11:02:02 crc kubenswrapper[5119]: I0121 11:02:02.355710 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483222-hnfhz" event={"ID":"e16ebb15-774e-406c-8c4d-640547741054","Type":"ContainerStarted","Data":"a3601342c0d0a26fc8ecd211bf731fc93f6eefbcfdd054e53fe60aa4994c4452"} Jan 21 11:02:02 crc kubenswrapper[5119]: I0121 11:02:02.370999 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29483222-hnfhz" podStartSLOduration=1.2570509109999999 podStartE2EDuration="2.370982861s" podCreationTimestamp="2026-01-21 11:02:00 +0000 UTC" firstStartedPulling="2026-01-21 11:02:00.889840863 +0000 UTC m=+4036.557932541" lastFinishedPulling="2026-01-21 11:02:02.003772813 +0000 UTC m=+4037.671864491" observedRunningTime="2026-01-21 11:02:02.370143809 +0000 UTC m=+4038.038235487" watchObservedRunningTime="2026-01-21 11:02:02.370982861 +0000 UTC m=+4038.039074529" Jan 21 11:02:03 crc kubenswrapper[5119]: I0121 11:02:03.364399 5119 generic.go:358] "Generic (PLEG): container finished" podID="e16ebb15-774e-406c-8c4d-640547741054" containerID="a3601342c0d0a26fc8ecd211bf731fc93f6eefbcfdd054e53fe60aa4994c4452" exitCode=0 
Jan 21 11:02:03 crc kubenswrapper[5119]: I0121 11:02:03.364827 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483222-hnfhz" event={"ID":"e16ebb15-774e-406c-8c4d-640547741054","Type":"ContainerDied","Data":"a3601342c0d0a26fc8ecd211bf731fc93f6eefbcfdd054e53fe60aa4994c4452"} Jan 21 11:02:04 crc kubenswrapper[5119]: I0121 11:02:04.624037 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483222-hnfhz" Jan 21 11:02:04 crc kubenswrapper[5119]: I0121 11:02:04.768771 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9dzt\" (UniqueName: \"kubernetes.io/projected/e16ebb15-774e-406c-8c4d-640547741054-kube-api-access-v9dzt\") pod \"e16ebb15-774e-406c-8c4d-640547741054\" (UID: \"e16ebb15-774e-406c-8c4d-640547741054\") " Jan 21 11:02:04 crc kubenswrapper[5119]: I0121 11:02:04.775122 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e16ebb15-774e-406c-8c4d-640547741054-kube-api-access-v9dzt" (OuterVolumeSpecName: "kube-api-access-v9dzt") pod "e16ebb15-774e-406c-8c4d-640547741054" (UID: "e16ebb15-774e-406c-8c4d-640547741054"). InnerVolumeSpecName "kube-api-access-v9dzt". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 11:02:04 crc kubenswrapper[5119]: I0121 11:02:04.871757 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v9dzt\" (UniqueName: \"kubernetes.io/projected/e16ebb15-774e-406c-8c4d-640547741054-kube-api-access-v9dzt\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:05 crc kubenswrapper[5119]: I0121 11:02:05.381515 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483222-hnfhz" event={"ID":"e16ebb15-774e-406c-8c4d-640547741054","Type":"ContainerDied","Data":"8021a52da5c27254da6a955d3ac3649d3f7e3ecca86ccf00032f16ebf834ffb3"} Jan 21 11:02:05 crc kubenswrapper[5119]: I0121 11:02:05.381565 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8021a52da5c27254da6a955d3ac3649d3f7e3ecca86ccf00032f16ebf834ffb3" Jan 21 11:02:05 crc kubenswrapper[5119]: I0121 11:02:05.381664 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483222-hnfhz" Jan 21 11:02:05 crc kubenswrapper[5119]: I0121 11:02:05.430929 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483216-dncs5"] Jan 21 11:02:05 crc kubenswrapper[5119]: I0121 11:02:05.436751 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483216-dncs5"] Jan 21 11:02:06 crc kubenswrapper[5119]: I0121 11:02:06.599793 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c888c8c-4960-4b7d-a7d3-122c21b7bf09" path="/var/lib/kubelet/pods/9c888c8c-4960-4b7d-a7d3-122c21b7bf09/volumes" Jan 21 11:02:08 crc kubenswrapper[5119]: I0121 11:02:08.591243 5119 scope.go:117] "RemoveContainer" containerID="dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a" Jan 21 11:02:08 crc kubenswrapper[5119]: E0121 11:02:08.591752 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 11:02:22 crc kubenswrapper[5119]: I0121 11:02:22.602309 5119 scope.go:117] "RemoveContainer" containerID="dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a" Jan 21 11:02:22 crc kubenswrapper[5119]: E0121 11:02:22.603236 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 11:02:35 crc kubenswrapper[5119]: I0121 11:02:35.591316 5119 scope.go:117] "RemoveContainer" containerID="dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a" Jan 21 11:02:35 crc kubenswrapper[5119]: E0121 11:02:35.591990 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 11:02:46 crc kubenswrapper[5119]: I0121 11:02:46.219215 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-vllfx"] Jan 21 11:02:46 crc kubenswrapper[5119]: I0121 11:02:46.220691 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing 
container" podUID="e16ebb15-774e-406c-8c4d-640547741054" containerName="oc" Jan 21 11:02:46 crc kubenswrapper[5119]: I0121 11:02:46.220708 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16ebb15-774e-406c-8c4d-640547741054" containerName="oc" Jan 21 11:02:46 crc kubenswrapper[5119]: I0121 11:02:46.220851 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="e16ebb15-774e-406c-8c4d-640547741054" containerName="oc" Jan 21 11:02:46 crc kubenswrapper[5119]: I0121 11:02:46.287206 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-vllfx" Jan 21 11:02:46 crc kubenswrapper[5119]: I0121 11:02:46.286976 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-vllfx"] Jan 21 11:02:46 crc kubenswrapper[5119]: I0121 11:02:46.413893 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmvbm\" (UniqueName: \"kubernetes.io/projected/a8610d44-42ca-4edb-b6f2-3e629fc9618c-kube-api-access-nmvbm\") pod \"infrawatch-operators-vllfx\" (UID: \"a8610d44-42ca-4edb-b6f2-3e629fc9618c\") " pod="service-telemetry/infrawatch-operators-vllfx" Jan 21 11:02:46 crc kubenswrapper[5119]: I0121 11:02:46.515135 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nmvbm\" (UniqueName: \"kubernetes.io/projected/a8610d44-42ca-4edb-b6f2-3e629fc9618c-kube-api-access-nmvbm\") pod \"infrawatch-operators-vllfx\" (UID: \"a8610d44-42ca-4edb-b6f2-3e629fc9618c\") " pod="service-telemetry/infrawatch-operators-vllfx" Jan 21 11:02:46 crc kubenswrapper[5119]: I0121 11:02:46.547865 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmvbm\" (UniqueName: \"kubernetes.io/projected/a8610d44-42ca-4edb-b6f2-3e629fc9618c-kube-api-access-nmvbm\") pod \"infrawatch-operators-vllfx\" (UID: \"a8610d44-42ca-4edb-b6f2-3e629fc9618c\") " 
pod="service-telemetry/infrawatch-operators-vllfx" Jan 21 11:02:46 crc kubenswrapper[5119]: I0121 11:02:46.603495 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-vllfx" Jan 21 11:02:46 crc kubenswrapper[5119]: I0121 11:02:46.812394 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-vllfx"] Jan 21 11:02:47 crc kubenswrapper[5119]: I0121 11:02:47.591333 5119 scope.go:117] "RemoveContainer" containerID="dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a" Jan 21 11:02:47 crc kubenswrapper[5119]: E0121 11:02:47.592087 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 11:02:47 crc kubenswrapper[5119]: I0121 11:02:47.719254 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-vllfx" event={"ID":"a8610d44-42ca-4edb-b6f2-3e629fc9618c","Type":"ContainerStarted","Data":"789e04bfa4e8937c834bd49249ba184c7e94beefa6ccfb2ab09efdded9801566"} Jan 21 11:02:47 crc kubenswrapper[5119]: I0121 11:02:47.719571 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-vllfx" event={"ID":"a8610d44-42ca-4edb-b6f2-3e629fc9618c","Type":"ContainerStarted","Data":"9e09fb604e028f0234920e625f4a8f06e762506fa4639b778cdb26cc533ba661"} Jan 21 11:02:47 crc kubenswrapper[5119]: I0121 11:02:47.740688 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-vllfx" podStartSLOduration=1.629922438 podStartE2EDuration="1.740672181s" 
podCreationTimestamp="2026-01-21 11:02:46 +0000 UTC" firstStartedPulling="2026-01-21 11:02:46.815278269 +0000 UTC m=+4082.483369947" lastFinishedPulling="2026-01-21 11:02:46.926028012 +0000 UTC m=+4082.594119690" observedRunningTime="2026-01-21 11:02:47.734507342 +0000 UTC m=+4083.402599020" watchObservedRunningTime="2026-01-21 11:02:47.740672181 +0000 UTC m=+4083.408763859" Jan 21 11:02:54 crc kubenswrapper[5119]: I0121 11:02:54.417262 5119 scope.go:117] "RemoveContainer" containerID="2949b0132fa607130ab58dfcfec2e4fb35ec6b08ee3c7a4dafa9cdef18898aee" Jan 21 11:02:56 crc kubenswrapper[5119]: I0121 11:02:56.604637 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-vllfx" Jan 21 11:02:56 crc kubenswrapper[5119]: I0121 11:02:56.610270 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-vllfx" Jan 21 11:02:56 crc kubenswrapper[5119]: I0121 11:02:56.639780 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-vllfx" Jan 21 11:02:56 crc kubenswrapper[5119]: I0121 11:02:56.826591 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-vllfx" Jan 21 11:02:56 crc kubenswrapper[5119]: I0121 11:02:56.879436 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-vllfx"] Jan 21 11:02:58 crc kubenswrapper[5119]: I0121 11:02:58.810076 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-vllfx" podUID="a8610d44-42ca-4edb-b6f2-3e629fc9618c" containerName="registry-server" containerID="cri-o://789e04bfa4e8937c834bd49249ba184c7e94beefa6ccfb2ab09efdded9801566" gracePeriod=2 Jan 21 11:02:59 crc kubenswrapper[5119]: I0121 11:02:59.153257 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-vllfx" Jan 21 11:02:59 crc kubenswrapper[5119]: I0121 11:02:59.216447 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmvbm\" (UniqueName: \"kubernetes.io/projected/a8610d44-42ca-4edb-b6f2-3e629fc9618c-kube-api-access-nmvbm\") pod \"a8610d44-42ca-4edb-b6f2-3e629fc9618c\" (UID: \"a8610d44-42ca-4edb-b6f2-3e629fc9618c\") " Jan 21 11:02:59 crc kubenswrapper[5119]: I0121 11:02:59.222473 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8610d44-42ca-4edb-b6f2-3e629fc9618c-kube-api-access-nmvbm" (OuterVolumeSpecName: "kube-api-access-nmvbm") pod "a8610d44-42ca-4edb-b6f2-3e629fc9618c" (UID: "a8610d44-42ca-4edb-b6f2-3e629fc9618c"). InnerVolumeSpecName "kube-api-access-nmvbm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 11:02:59 crc kubenswrapper[5119]: I0121 11:02:59.318448 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmvbm\" (UniqueName: \"kubernetes.io/projected/a8610d44-42ca-4edb-b6f2-3e629fc9618c-kube-api-access-nmvbm\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:59 crc kubenswrapper[5119]: I0121 11:02:59.591296 5119 scope.go:117] "RemoveContainer" containerID="dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a" Jan 21 11:02:59 crc kubenswrapper[5119]: E0121 11:02:59.591769 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 11:02:59 crc kubenswrapper[5119]: I0121 11:02:59.819732 5119 generic.go:358] "Generic (PLEG): container finished" 
podID="a8610d44-42ca-4edb-b6f2-3e629fc9618c" containerID="789e04bfa4e8937c834bd49249ba184c7e94beefa6ccfb2ab09efdded9801566" exitCode=0 Jan 21 11:02:59 crc kubenswrapper[5119]: I0121 11:02:59.820009 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-vllfx" event={"ID":"a8610d44-42ca-4edb-b6f2-3e629fc9618c","Type":"ContainerDied","Data":"789e04bfa4e8937c834bd49249ba184c7e94beefa6ccfb2ab09efdded9801566"} Jan 21 11:02:59 crc kubenswrapper[5119]: I0121 11:02:59.820039 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-vllfx" event={"ID":"a8610d44-42ca-4edb-b6f2-3e629fc9618c","Type":"ContainerDied","Data":"9e09fb604e028f0234920e625f4a8f06e762506fa4639b778cdb26cc533ba661"} Jan 21 11:02:59 crc kubenswrapper[5119]: I0121 11:02:59.820059 5119 scope.go:117] "RemoveContainer" containerID="789e04bfa4e8937c834bd49249ba184c7e94beefa6ccfb2ab09efdded9801566" Jan 21 11:02:59 crc kubenswrapper[5119]: I0121 11:02:59.820273 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-vllfx" Jan 21 11:02:59 crc kubenswrapper[5119]: I0121 11:02:59.853062 5119 scope.go:117] "RemoveContainer" containerID="789e04bfa4e8937c834bd49249ba184c7e94beefa6ccfb2ab09efdded9801566" Jan 21 11:02:59 crc kubenswrapper[5119]: E0121 11:02:59.853845 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"789e04bfa4e8937c834bd49249ba184c7e94beefa6ccfb2ab09efdded9801566\": container with ID starting with 789e04bfa4e8937c834bd49249ba184c7e94beefa6ccfb2ab09efdded9801566 not found: ID does not exist" containerID="789e04bfa4e8937c834bd49249ba184c7e94beefa6ccfb2ab09efdded9801566" Jan 21 11:02:59 crc kubenswrapper[5119]: I0121 11:02:59.853904 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"789e04bfa4e8937c834bd49249ba184c7e94beefa6ccfb2ab09efdded9801566"} err="failed to get container status \"789e04bfa4e8937c834bd49249ba184c7e94beefa6ccfb2ab09efdded9801566\": rpc error: code = NotFound desc = could not find container \"789e04bfa4e8937c834bd49249ba184c7e94beefa6ccfb2ab09efdded9801566\": container with ID starting with 789e04bfa4e8937c834bd49249ba184c7e94beefa6ccfb2ab09efdded9801566 not found: ID does not exist" Jan 21 11:02:59 crc kubenswrapper[5119]: I0121 11:02:59.853939 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-vllfx"] Jan 21 11:02:59 crc kubenswrapper[5119]: I0121 11:02:59.861976 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-vllfx"] Jan 21 11:03:00 crc kubenswrapper[5119]: I0121 11:03:00.599302 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8610d44-42ca-4edb-b6f2-3e629fc9618c" path="/var/lib/kubelet/pods/a8610d44-42ca-4edb-b6f2-3e629fc9618c/volumes" Jan 21 11:03:11 crc kubenswrapper[5119]: I0121 11:03:11.592302 5119 scope.go:117] 
"RemoveContainer" containerID="dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a" Jan 21 11:03:11 crc kubenswrapper[5119]: E0121 11:03:11.592863 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 11:03:24 crc kubenswrapper[5119]: I0121 11:03:24.596377 5119 scope.go:117] "RemoveContainer" containerID="dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a" Jan 21 11:03:24 crc kubenswrapper[5119]: E0121 11:03:24.597113 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 11:03:35 crc kubenswrapper[5119]: I0121 11:03:35.591499 5119 scope.go:117] "RemoveContainer" containerID="dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a" Jan 21 11:03:35 crc kubenswrapper[5119]: E0121 11:03:35.592073 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 11:03:46 crc kubenswrapper[5119]: I0121 11:03:46.591274 
5119 scope.go:117] "RemoveContainer" containerID="dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a" Jan 21 11:03:46 crc kubenswrapper[5119]: E0121 11:03:46.592566 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 11:04:00 crc kubenswrapper[5119]: I0121 11:04:00.136795 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483224-q527q"] Jan 21 11:04:00 crc kubenswrapper[5119]: I0121 11:04:00.140862 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a8610d44-42ca-4edb-b6f2-3e629fc9618c" containerName="registry-server" Jan 21 11:04:00 crc kubenswrapper[5119]: I0121 11:04:00.140897 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8610d44-42ca-4edb-b6f2-3e629fc9618c" containerName="registry-server" Jan 21 11:04:00 crc kubenswrapper[5119]: I0121 11:04:00.141028 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="a8610d44-42ca-4edb-b6f2-3e629fc9618c" containerName="registry-server" Jan 21 11:04:00 crc kubenswrapper[5119]: I0121 11:04:00.155099 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483224-q527q"] Jan 21 11:04:00 crc kubenswrapper[5119]: I0121 11:04:00.155224 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483224-q527q" Jan 21 11:04:00 crc kubenswrapper[5119]: I0121 11:04:00.173714 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 11:04:00 crc kubenswrapper[5119]: I0121 11:04:00.173791 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\"" Jan 21 11:04:00 crc kubenswrapper[5119]: I0121 11:04:00.173821 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 11:04:00 crc kubenswrapper[5119]: I0121 11:04:00.288324 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24zfg\" (UniqueName: \"kubernetes.io/projected/676b1f57-46ad-4f31-b5f4-fc3590b0458d-kube-api-access-24zfg\") pod \"auto-csr-approver-29483224-q527q\" (UID: \"676b1f57-46ad-4f31-b5f4-fc3590b0458d\") " pod="openshift-infra/auto-csr-approver-29483224-q527q" Jan 21 11:04:00 crc kubenswrapper[5119]: I0121 11:04:00.390485 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-24zfg\" (UniqueName: \"kubernetes.io/projected/676b1f57-46ad-4f31-b5f4-fc3590b0458d-kube-api-access-24zfg\") pod \"auto-csr-approver-29483224-q527q\" (UID: \"676b1f57-46ad-4f31-b5f4-fc3590b0458d\") " pod="openshift-infra/auto-csr-approver-29483224-q527q" Jan 21 11:04:00 crc kubenswrapper[5119]: I0121 11:04:00.411498 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-24zfg\" (UniqueName: \"kubernetes.io/projected/676b1f57-46ad-4f31-b5f4-fc3590b0458d-kube-api-access-24zfg\") pod \"auto-csr-approver-29483224-q527q\" (UID: \"676b1f57-46ad-4f31-b5f4-fc3590b0458d\") " pod="openshift-infra/auto-csr-approver-29483224-q527q" Jan 21 11:04:00 crc kubenswrapper[5119]: I0121 11:04:00.501308 5119 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483224-q527q" Jan 21 11:04:00 crc kubenswrapper[5119]: I0121 11:04:00.604954 5119 scope.go:117] "RemoveContainer" containerID="dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a" Jan 21 11:04:00 crc kubenswrapper[5119]: E0121 11:04:00.605196 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 11:04:00 crc kubenswrapper[5119]: I0121 11:04:00.680676 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483224-q527q"] Jan 21 11:04:01 crc kubenswrapper[5119]: I0121 11:04:01.264641 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483224-q527q" event={"ID":"676b1f57-46ad-4f31-b5f4-fc3590b0458d","Type":"ContainerStarted","Data":"55b69a1648072f00fad6603cb0e4c40f5f53aa74d1acb72ef7e83142e704b178"} Jan 21 11:04:02 crc kubenswrapper[5119]: I0121 11:04:02.273951 5119 generic.go:358] "Generic (PLEG): container finished" podID="676b1f57-46ad-4f31-b5f4-fc3590b0458d" containerID="8d3cb039b77417d80cb6a24768a3944ad13e905a35127b2292affaac3680c7d0" exitCode=0 Jan 21 11:04:02 crc kubenswrapper[5119]: I0121 11:04:02.274015 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483224-q527q" event={"ID":"676b1f57-46ad-4f31-b5f4-fc3590b0458d","Type":"ContainerDied","Data":"8d3cb039b77417d80cb6a24768a3944ad13e905a35127b2292affaac3680c7d0"} Jan 21 11:04:03 crc kubenswrapper[5119]: I0121 11:04:03.512186 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483224-q527q" Jan 21 11:04:03 crc kubenswrapper[5119]: I0121 11:04:03.528083 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24zfg\" (UniqueName: \"kubernetes.io/projected/676b1f57-46ad-4f31-b5f4-fc3590b0458d-kube-api-access-24zfg\") pod \"676b1f57-46ad-4f31-b5f4-fc3590b0458d\" (UID: \"676b1f57-46ad-4f31-b5f4-fc3590b0458d\") " Jan 21 11:04:03 crc kubenswrapper[5119]: I0121 11:04:03.533936 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/676b1f57-46ad-4f31-b5f4-fc3590b0458d-kube-api-access-24zfg" (OuterVolumeSpecName: "kube-api-access-24zfg") pod "676b1f57-46ad-4f31-b5f4-fc3590b0458d" (UID: "676b1f57-46ad-4f31-b5f4-fc3590b0458d"). InnerVolumeSpecName "kube-api-access-24zfg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 11:04:03 crc kubenswrapper[5119]: I0121 11:04:03.630114 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-24zfg\" (UniqueName: \"kubernetes.io/projected/676b1f57-46ad-4f31-b5f4-fc3590b0458d-kube-api-access-24zfg\") on node \"crc\" DevicePath \"\"" Jan 21 11:04:04 crc kubenswrapper[5119]: I0121 11:04:04.289270 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483224-q527q" Jan 21 11:04:04 crc kubenswrapper[5119]: I0121 11:04:04.289320 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483224-q527q" event={"ID":"676b1f57-46ad-4f31-b5f4-fc3590b0458d","Type":"ContainerDied","Data":"55b69a1648072f00fad6603cb0e4c40f5f53aa74d1acb72ef7e83142e704b178"} Jan 21 11:04:04 crc kubenswrapper[5119]: I0121 11:04:04.289359 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55b69a1648072f00fad6603cb0e4c40f5f53aa74d1acb72ef7e83142e704b178" Jan 21 11:04:04 crc kubenswrapper[5119]: I0121 11:04:04.571221 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483218-t8hlk"] Jan 21 11:04:04 crc kubenswrapper[5119]: I0121 11:04:04.576419 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483218-t8hlk"] Jan 21 11:04:04 crc kubenswrapper[5119]: I0121 11:04:04.599997 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06d8398a-ac2e-49e1-a854-f96510b839a1" path="/var/lib/kubelet/pods/06d8398a-ac2e-49e1-a854-f96510b839a1/volumes" Jan 21 11:04:12 crc kubenswrapper[5119]: I0121 11:04:12.591678 5119 scope.go:117] "RemoveContainer" containerID="dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a" Jan 21 11:04:12 crc kubenswrapper[5119]: E0121 11:04:12.592323 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 11:04:23 crc kubenswrapper[5119]: I0121 11:04:23.592061 5119 scope.go:117] "RemoveContainer" 
containerID="dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a" Jan 21 11:04:23 crc kubenswrapper[5119]: E0121 11:04:23.592718 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 11:04:35 crc kubenswrapper[5119]: I0121 11:04:35.591205 5119 scope.go:117] "RemoveContainer" containerID="dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a" Jan 21 11:04:35 crc kubenswrapper[5119]: E0121 11:04:35.591979 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 11:04:47 crc kubenswrapper[5119]: I0121 11:04:47.059710 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log" Jan 21 11:04:47 crc kubenswrapper[5119]: I0121 11:04:47.062109 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log" Jan 21 11:04:47 crc kubenswrapper[5119]: I0121 11:04:47.065244 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 21 11:04:47 crc kubenswrapper[5119]: I0121 11:04:47.066180 5119 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 21 11:04:48 crc kubenswrapper[5119]: I0121 11:04:48.591088 5119 scope.go:117] "RemoveContainer" containerID="dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a" Jan 21 11:04:48 crc kubenswrapper[5119]: E0121 11:04:48.591721 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 11:04:54 crc kubenswrapper[5119]: I0121 11:04:54.552976 5119 scope.go:117] "RemoveContainer" containerID="b134f0350900494a38e1131767102b55dcf3479692be6384ece7256cdb31d9cd" Jan 21 11:05:02 crc kubenswrapper[5119]: I0121 11:05:02.593070 5119 scope.go:117] "RemoveContainer" containerID="dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a" Jan 21 11:05:02 crc kubenswrapper[5119]: E0121 11:05:02.593825 5119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 11:05:16 crc kubenswrapper[5119]: I0121 11:05:16.590661 5119 scope.go:117] "RemoveContainer" containerID="dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a" Jan 21 11:05:16 crc kubenswrapper[5119]: E0121 11:05:16.591537 5119 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5vwrk_openshift-machine-config-operator(f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" Jan 21 11:05:30 crc kubenswrapper[5119]: I0121 11:05:30.590971 5119 scope.go:117] "RemoveContainer" containerID="dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a" Jan 21 11:05:31 crc kubenswrapper[5119]: I0121 11:05:31.904066 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerStarted","Data":"20d2df21174c09dd6490e1829cf9cd3cf48e6a78f413696ad837c4e5c9e9ab1e"} Jan 21 11:06:00 crc kubenswrapper[5119]: I0121 11:06:00.134650 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483226-79v5w"] Jan 21 11:06:00 crc kubenswrapper[5119]: I0121 11:06:00.137318 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="676b1f57-46ad-4f31-b5f4-fc3590b0458d" containerName="oc" Jan 21 11:06:00 crc kubenswrapper[5119]: I0121 11:06:00.137421 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="676b1f57-46ad-4f31-b5f4-fc3590b0458d" containerName="oc" Jan 21 11:06:00 crc kubenswrapper[5119]: I0121 11:06:00.137672 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="676b1f57-46ad-4f31-b5f4-fc3590b0458d" containerName="oc" Jan 21 11:06:00 crc kubenswrapper[5119]: I0121 11:06:00.152946 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483226-79v5w"] Jan 21 11:06:00 crc kubenswrapper[5119]: I0121 11:06:00.153280 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483226-79v5w" Jan 21 11:06:00 crc kubenswrapper[5119]: I0121 11:06:00.155349 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 11:06:00 crc kubenswrapper[5119]: I0121 11:06:00.155401 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\"" Jan 21 11:06:00 crc kubenswrapper[5119]: I0121 11:06:00.155349 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 11:06:00 crc kubenswrapper[5119]: I0121 11:06:00.239923 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pztn8\" (UniqueName: \"kubernetes.io/projected/a1fb27dd-dda2-483e-b6a0-5de639b8f55e-kube-api-access-pztn8\") pod \"auto-csr-approver-29483226-79v5w\" (UID: \"a1fb27dd-dda2-483e-b6a0-5de639b8f55e\") " pod="openshift-infra/auto-csr-approver-29483226-79v5w" Jan 21 11:06:00 crc kubenswrapper[5119]: I0121 11:06:00.342276 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pztn8\" (UniqueName: \"kubernetes.io/projected/a1fb27dd-dda2-483e-b6a0-5de639b8f55e-kube-api-access-pztn8\") pod \"auto-csr-approver-29483226-79v5w\" (UID: \"a1fb27dd-dda2-483e-b6a0-5de639b8f55e\") " pod="openshift-infra/auto-csr-approver-29483226-79v5w" Jan 21 11:06:00 crc kubenswrapper[5119]: I0121 11:06:00.367236 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pztn8\" (UniqueName: \"kubernetes.io/projected/a1fb27dd-dda2-483e-b6a0-5de639b8f55e-kube-api-access-pztn8\") pod \"auto-csr-approver-29483226-79v5w\" (UID: \"a1fb27dd-dda2-483e-b6a0-5de639b8f55e\") " pod="openshift-infra/auto-csr-approver-29483226-79v5w" Jan 21 11:06:00 crc kubenswrapper[5119]: I0121 11:06:00.478182 5119 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483226-79v5w" Jan 21 11:06:00 crc kubenswrapper[5119]: I0121 11:06:00.880615 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483226-79v5w"] Jan 21 11:06:01 crc kubenswrapper[5119]: I0121 11:06:01.111989 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483226-79v5w" event={"ID":"a1fb27dd-dda2-483e-b6a0-5de639b8f55e","Type":"ContainerStarted","Data":"ef9143201b9afd286264454d5bdffc23217161e9e2c9ee6e25df3c1e5b46b45e"} Jan 21 11:06:03 crc kubenswrapper[5119]: I0121 11:06:03.127523 5119 generic.go:358] "Generic (PLEG): container finished" podID="a1fb27dd-dda2-483e-b6a0-5de639b8f55e" containerID="bc911407e0182b49540302436aff46696022e31dfa0466d5f64b17bf13672edf" exitCode=0 Jan 21 11:06:03 crc kubenswrapper[5119]: I0121 11:06:03.127583 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483226-79v5w" event={"ID":"a1fb27dd-dda2-483e-b6a0-5de639b8f55e","Type":"ContainerDied","Data":"bc911407e0182b49540302436aff46696022e31dfa0466d5f64b17bf13672edf"} Jan 21 11:06:04 crc kubenswrapper[5119]: I0121 11:06:04.463420 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483226-79v5w" Jan 21 11:06:04 crc kubenswrapper[5119]: I0121 11:06:04.602912 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pztn8\" (UniqueName: \"kubernetes.io/projected/a1fb27dd-dda2-483e-b6a0-5de639b8f55e-kube-api-access-pztn8\") pod \"a1fb27dd-dda2-483e-b6a0-5de639b8f55e\" (UID: \"a1fb27dd-dda2-483e-b6a0-5de639b8f55e\") " Jan 21 11:06:04 crc kubenswrapper[5119]: I0121 11:06:04.614396 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1fb27dd-dda2-483e-b6a0-5de639b8f55e-kube-api-access-pztn8" (OuterVolumeSpecName: "kube-api-access-pztn8") pod "a1fb27dd-dda2-483e-b6a0-5de639b8f55e" (UID: "a1fb27dd-dda2-483e-b6a0-5de639b8f55e"). InnerVolumeSpecName "kube-api-access-pztn8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 11:06:04 crc kubenswrapper[5119]: I0121 11:06:04.705994 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pztn8\" (UniqueName: \"kubernetes.io/projected/a1fb27dd-dda2-483e-b6a0-5de639b8f55e-kube-api-access-pztn8\") on node \"crc\" DevicePath \"\"" Jan 21 11:06:05 crc kubenswrapper[5119]: I0121 11:06:05.143632 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483226-79v5w" Jan 21 11:06:05 crc kubenswrapper[5119]: I0121 11:06:05.143661 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483226-79v5w" event={"ID":"a1fb27dd-dda2-483e-b6a0-5de639b8f55e","Type":"ContainerDied","Data":"ef9143201b9afd286264454d5bdffc23217161e9e2c9ee6e25df3c1e5b46b45e"} Jan 21 11:06:05 crc kubenswrapper[5119]: I0121 11:06:05.143707 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef9143201b9afd286264454d5bdffc23217161e9e2c9ee6e25df3c1e5b46b45e" Jan 21 11:06:05 crc kubenswrapper[5119]: I0121 11:06:05.530990 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483220-4x8w9"] Jan 21 11:06:05 crc kubenswrapper[5119]: I0121 11:06:05.537265 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483220-4x8w9"] Jan 21 11:06:06 crc kubenswrapper[5119]: I0121 11:06:06.600545 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a203e08-866e-47d6-b57f-37cbafc005f9" path="/var/lib/kubelet/pods/5a203e08-866e-47d6-b57f-37cbafc005f9/volumes" Jan 21 11:06:54 crc kubenswrapper[5119]: I0121 11:06:54.711977 5119 scope.go:117] "RemoveContainer" containerID="49cd63696e0d2fbb7609c9372de90327a56299f76c547185c0fa0b039add8204" Jan 21 11:07:49 crc kubenswrapper[5119]: I0121 11:07:49.918875 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:07:49 crc kubenswrapper[5119]: I0121 11:07:49.919477 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:08:00 crc kubenswrapper[5119]: I0121 11:08:00.128759 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483228-2c4l7"] Jan 21 11:08:00 crc kubenswrapper[5119]: I0121 11:08:00.130135 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a1fb27dd-dda2-483e-b6a0-5de639b8f55e" containerName="oc" Jan 21 11:08:00 crc kubenswrapper[5119]: I0121 11:08:00.130155 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fb27dd-dda2-483e-b6a0-5de639b8f55e" containerName="oc" Jan 21 11:08:00 crc kubenswrapper[5119]: I0121 11:08:00.130307 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="a1fb27dd-dda2-483e-b6a0-5de639b8f55e" containerName="oc" Jan 21 11:08:00 crc kubenswrapper[5119]: I0121 11:08:00.137419 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483228-2c4l7" Jan 21 11:08:00 crc kubenswrapper[5119]: I0121 11:08:00.138658 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483228-2c4l7"] Jan 21 11:08:00 crc kubenswrapper[5119]: I0121 11:08:00.139262 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\"" Jan 21 11:08:00 crc kubenswrapper[5119]: I0121 11:08:00.139371 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 11:08:00 crc kubenswrapper[5119]: I0121 11:08:00.140736 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 11:08:00 crc kubenswrapper[5119]: I0121 11:08:00.201130 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw4nd\" (UniqueName: \"kubernetes.io/projected/13f319bd-4cee-488b-be9d-a12fdbcffcba-kube-api-access-mw4nd\") pod \"auto-csr-approver-29483228-2c4l7\" (UID: \"13f319bd-4cee-488b-be9d-a12fdbcffcba\") " pod="openshift-infra/auto-csr-approver-29483228-2c4l7" Jan 21 11:08:00 crc kubenswrapper[5119]: I0121 11:08:00.302595 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mw4nd\" (UniqueName: \"kubernetes.io/projected/13f319bd-4cee-488b-be9d-a12fdbcffcba-kube-api-access-mw4nd\") pod \"auto-csr-approver-29483228-2c4l7\" (UID: \"13f319bd-4cee-488b-be9d-a12fdbcffcba\") " pod="openshift-infra/auto-csr-approver-29483228-2c4l7" Jan 21 11:08:00 crc kubenswrapper[5119]: I0121 11:08:00.335542 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw4nd\" (UniqueName: \"kubernetes.io/projected/13f319bd-4cee-488b-be9d-a12fdbcffcba-kube-api-access-mw4nd\") pod \"auto-csr-approver-29483228-2c4l7\" (UID: 
\"13f319bd-4cee-488b-be9d-a12fdbcffcba\") " pod="openshift-infra/auto-csr-approver-29483228-2c4l7" Jan 21 11:08:00 crc kubenswrapper[5119]: I0121 11:08:00.453330 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483228-2c4l7" Jan 21 11:08:00 crc kubenswrapper[5119]: I0121 11:08:00.646525 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483228-2c4l7"] Jan 21 11:08:00 crc kubenswrapper[5119]: I0121 11:08:00.658482 5119 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 11:08:01 crc kubenswrapper[5119]: I0121 11:08:01.017623 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483228-2c4l7" event={"ID":"13f319bd-4cee-488b-be9d-a12fdbcffcba","Type":"ContainerStarted","Data":"5267bdec8812f7f6babd05d7d5836c6628dcada3a51d1d1e298a402012bbf77c"} Jan 21 11:08:02 crc kubenswrapper[5119]: I0121 11:08:02.024485 5119 generic.go:358] "Generic (PLEG): container finished" podID="13f319bd-4cee-488b-be9d-a12fdbcffcba" containerID="68cd4643f3ec8097e4f0741ceaf8795e055094d71802f4597a5112497c9b97a0" exitCode=0 Jan 21 11:08:02 crc kubenswrapper[5119]: I0121 11:08:02.024909 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483228-2c4l7" event={"ID":"13f319bd-4cee-488b-be9d-a12fdbcffcba","Type":"ContainerDied","Data":"68cd4643f3ec8097e4f0741ceaf8795e055094d71802f4597a5112497c9b97a0"} Jan 21 11:08:03 crc kubenswrapper[5119]: I0121 11:08:03.332863 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483228-2c4l7" Jan 21 11:08:03 crc kubenswrapper[5119]: I0121 11:08:03.445881 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw4nd\" (UniqueName: \"kubernetes.io/projected/13f319bd-4cee-488b-be9d-a12fdbcffcba-kube-api-access-mw4nd\") pod \"13f319bd-4cee-488b-be9d-a12fdbcffcba\" (UID: \"13f319bd-4cee-488b-be9d-a12fdbcffcba\") " Jan 21 11:08:03 crc kubenswrapper[5119]: I0121 11:08:03.451215 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13f319bd-4cee-488b-be9d-a12fdbcffcba-kube-api-access-mw4nd" (OuterVolumeSpecName: "kube-api-access-mw4nd") pod "13f319bd-4cee-488b-be9d-a12fdbcffcba" (UID: "13f319bd-4cee-488b-be9d-a12fdbcffcba"). InnerVolumeSpecName "kube-api-access-mw4nd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 11:08:03 crc kubenswrapper[5119]: I0121 11:08:03.548844 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mw4nd\" (UniqueName: \"kubernetes.io/projected/13f319bd-4cee-488b-be9d-a12fdbcffcba-kube-api-access-mw4nd\") on node \"crc\" DevicePath \"\"" Jan 21 11:08:04 crc kubenswrapper[5119]: I0121 11:08:04.038732 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483228-2c4l7" Jan 21 11:08:04 crc kubenswrapper[5119]: I0121 11:08:04.038741 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483228-2c4l7" event={"ID":"13f319bd-4cee-488b-be9d-a12fdbcffcba","Type":"ContainerDied","Data":"5267bdec8812f7f6babd05d7d5836c6628dcada3a51d1d1e298a402012bbf77c"} Jan 21 11:08:04 crc kubenswrapper[5119]: I0121 11:08:04.038828 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5267bdec8812f7f6babd05d7d5836c6628dcada3a51d1d1e298a402012bbf77c" Jan 21 11:08:04 crc kubenswrapper[5119]: I0121 11:08:04.392179 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483222-hnfhz"] Jan 21 11:08:04 crc kubenswrapper[5119]: I0121 11:08:04.398103 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483222-hnfhz"] Jan 21 11:08:04 crc kubenswrapper[5119]: I0121 11:08:04.601033 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e16ebb15-774e-406c-8c4d-640547741054" path="/var/lib/kubelet/pods/e16ebb15-774e-406c-8c4d-640547741054/volumes" Jan 21 11:08:14 crc kubenswrapper[5119]: I0121 11:08:14.343142 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-h7bsl"] Jan 21 11:08:14 crc kubenswrapper[5119]: I0121 11:08:14.344454 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="13f319bd-4cee-488b-be9d-a12fdbcffcba" containerName="oc" Jan 21 11:08:14 crc kubenswrapper[5119]: I0121 11:08:14.344468 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="13f319bd-4cee-488b-be9d-a12fdbcffcba" containerName="oc" Jan 21 11:08:14 crc kubenswrapper[5119]: I0121 11:08:14.344614 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="13f319bd-4cee-488b-be9d-a12fdbcffcba" containerName="oc" Jan 21 11:08:14 crc 
kubenswrapper[5119]: I0121 11:08:14.348458 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-h7bsl" Jan 21 11:08:14 crc kubenswrapper[5119]: I0121 11:08:14.350475 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-h7bsl"] Jan 21 11:08:14 crc kubenswrapper[5119]: I0121 11:08:14.521847 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbcx8\" (UniqueName: \"kubernetes.io/projected/c5805dea-d12d-4e5e-a96b-443c7a4f944b-kube-api-access-xbcx8\") pod \"infrawatch-operators-h7bsl\" (UID: \"c5805dea-d12d-4e5e-a96b-443c7a4f944b\") " pod="service-telemetry/infrawatch-operators-h7bsl" Jan 21 11:08:14 crc kubenswrapper[5119]: I0121 11:08:14.623013 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xbcx8\" (UniqueName: \"kubernetes.io/projected/c5805dea-d12d-4e5e-a96b-443c7a4f944b-kube-api-access-xbcx8\") pod \"infrawatch-operators-h7bsl\" (UID: \"c5805dea-d12d-4e5e-a96b-443c7a4f944b\") " pod="service-telemetry/infrawatch-operators-h7bsl" Jan 21 11:08:14 crc kubenswrapper[5119]: I0121 11:08:14.642047 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbcx8\" (UniqueName: \"kubernetes.io/projected/c5805dea-d12d-4e5e-a96b-443c7a4f944b-kube-api-access-xbcx8\") pod \"infrawatch-operators-h7bsl\" (UID: \"c5805dea-d12d-4e5e-a96b-443c7a4f944b\") " pod="service-telemetry/infrawatch-operators-h7bsl" Jan 21 11:08:14 crc kubenswrapper[5119]: I0121 11:08:14.665807 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-h7bsl" Jan 21 11:08:15 crc kubenswrapper[5119]: I0121 11:08:15.120519 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-h7bsl"] Jan 21 11:08:16 crc kubenswrapper[5119]: I0121 11:08:16.123853 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-h7bsl" event={"ID":"c5805dea-d12d-4e5e-a96b-443c7a4f944b","Type":"ContainerStarted","Data":"ba119668d354c0e25ac2ec9eaf3f6b68156bf54200317c2abb71319d02ac4b62"} Jan 21 11:08:16 crc kubenswrapper[5119]: I0121 11:08:16.123904 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-h7bsl" event={"ID":"c5805dea-d12d-4e5e-a96b-443c7a4f944b","Type":"ContainerStarted","Data":"e1f28168cc010a6c9a79e2cb2e06877f61a9aa5e48820a0328a61864520ec1ae"} Jan 21 11:08:16 crc kubenswrapper[5119]: I0121 11:08:16.139693 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-h7bsl" podStartSLOduration=2.057323994 podStartE2EDuration="2.139676603s" podCreationTimestamp="2026-01-21 11:08:14 +0000 UTC" firstStartedPulling="2026-01-21 11:08:15.136205149 +0000 UTC m=+4410.804296827" lastFinishedPulling="2026-01-21 11:08:15.218557758 +0000 UTC m=+4410.886649436" observedRunningTime="2026-01-21 11:08:16.137321069 +0000 UTC m=+4411.805412737" watchObservedRunningTime="2026-01-21 11:08:16.139676603 +0000 UTC m=+4411.807768281" Jan 21 11:08:19 crc kubenswrapper[5119]: I0121 11:08:19.919772 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:08:19 crc kubenswrapper[5119]: I0121 11:08:19.920200 5119 prober.go:120] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:08:24 crc kubenswrapper[5119]: I0121 11:08:24.666799 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-h7bsl" Jan 21 11:08:24 crc kubenswrapper[5119]: I0121 11:08:24.667274 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-h7bsl" Jan 21 11:08:24 crc kubenswrapper[5119]: I0121 11:08:24.694541 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-h7bsl" Jan 21 11:08:25 crc kubenswrapper[5119]: I0121 11:08:25.222757 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-h7bsl" Jan 21 11:08:25 crc kubenswrapper[5119]: I0121 11:08:25.284470 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-h7bsl"] Jan 21 11:08:27 crc kubenswrapper[5119]: I0121 11:08:27.203703 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-h7bsl" podUID="c5805dea-d12d-4e5e-a96b-443c7a4f944b" containerName="registry-server" containerID="cri-o://ba119668d354c0e25ac2ec9eaf3f6b68156bf54200317c2abb71319d02ac4b62" gracePeriod=2 Jan 21 11:08:27 crc kubenswrapper[5119]: I0121 11:08:27.578525 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-h7bsl" Jan 21 11:08:27 crc kubenswrapper[5119]: I0121 11:08:27.612051 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbcx8\" (UniqueName: \"kubernetes.io/projected/c5805dea-d12d-4e5e-a96b-443c7a4f944b-kube-api-access-xbcx8\") pod \"c5805dea-d12d-4e5e-a96b-443c7a4f944b\" (UID: \"c5805dea-d12d-4e5e-a96b-443c7a4f944b\") " Jan 21 11:08:27 crc kubenswrapper[5119]: I0121 11:08:27.619917 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5805dea-d12d-4e5e-a96b-443c7a4f944b-kube-api-access-xbcx8" (OuterVolumeSpecName: "kube-api-access-xbcx8") pod "c5805dea-d12d-4e5e-a96b-443c7a4f944b" (UID: "c5805dea-d12d-4e5e-a96b-443c7a4f944b"). InnerVolumeSpecName "kube-api-access-xbcx8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 11:08:27 crc kubenswrapper[5119]: I0121 11:08:27.714497 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xbcx8\" (UniqueName: \"kubernetes.io/projected/c5805dea-d12d-4e5e-a96b-443c7a4f944b-kube-api-access-xbcx8\") on node \"crc\" DevicePath \"\"" Jan 21 11:08:28 crc kubenswrapper[5119]: I0121 11:08:28.214941 5119 generic.go:358] "Generic (PLEG): container finished" podID="c5805dea-d12d-4e5e-a96b-443c7a4f944b" containerID="ba119668d354c0e25ac2ec9eaf3f6b68156bf54200317c2abb71319d02ac4b62" exitCode=0 Jan 21 11:08:28 crc kubenswrapper[5119]: I0121 11:08:28.215331 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-h7bsl" event={"ID":"c5805dea-d12d-4e5e-a96b-443c7a4f944b","Type":"ContainerDied","Data":"ba119668d354c0e25ac2ec9eaf3f6b68156bf54200317c2abb71319d02ac4b62"} Jan 21 11:08:28 crc kubenswrapper[5119]: I0121 11:08:28.215356 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-h7bsl" 
event={"ID":"c5805dea-d12d-4e5e-a96b-443c7a4f944b","Type":"ContainerDied","Data":"e1f28168cc010a6c9a79e2cb2e06877f61a9aa5e48820a0328a61864520ec1ae"} Jan 21 11:08:28 crc kubenswrapper[5119]: I0121 11:08:28.215372 5119 scope.go:117] "RemoveContainer" containerID="ba119668d354c0e25ac2ec9eaf3f6b68156bf54200317c2abb71319d02ac4b62" Jan 21 11:08:28 crc kubenswrapper[5119]: I0121 11:08:28.215497 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-h7bsl" Jan 21 11:08:28 crc kubenswrapper[5119]: I0121 11:08:28.253403 5119 scope.go:117] "RemoveContainer" containerID="ba119668d354c0e25ac2ec9eaf3f6b68156bf54200317c2abb71319d02ac4b62" Jan 21 11:08:28 crc kubenswrapper[5119]: E0121 11:08:28.253721 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba119668d354c0e25ac2ec9eaf3f6b68156bf54200317c2abb71319d02ac4b62\": container with ID starting with ba119668d354c0e25ac2ec9eaf3f6b68156bf54200317c2abb71319d02ac4b62 not found: ID does not exist" containerID="ba119668d354c0e25ac2ec9eaf3f6b68156bf54200317c2abb71319d02ac4b62" Jan 21 11:08:28 crc kubenswrapper[5119]: I0121 11:08:28.253755 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba119668d354c0e25ac2ec9eaf3f6b68156bf54200317c2abb71319d02ac4b62"} err="failed to get container status \"ba119668d354c0e25ac2ec9eaf3f6b68156bf54200317c2abb71319d02ac4b62\": rpc error: code = NotFound desc = could not find container \"ba119668d354c0e25ac2ec9eaf3f6b68156bf54200317c2abb71319d02ac4b62\": container with ID starting with ba119668d354c0e25ac2ec9eaf3f6b68156bf54200317c2abb71319d02ac4b62 not found: ID does not exist" Jan 21 11:08:28 crc kubenswrapper[5119]: I0121 11:08:28.253935 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-h7bsl"] Jan 21 11:08:28 crc kubenswrapper[5119]: I0121 11:08:28.259721 5119 
kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-h7bsl"] Jan 21 11:08:28 crc kubenswrapper[5119]: I0121 11:08:28.599571 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5805dea-d12d-4e5e-a96b-443c7a4f944b" path="/var/lib/kubelet/pods/c5805dea-d12d-4e5e-a96b-443c7a4f944b/volumes" Jan 21 11:08:30 crc kubenswrapper[5119]: I0121 11:08:30.582086 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zqkhj"] Jan 21 11:08:30 crc kubenswrapper[5119]: I0121 11:08:30.582830 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c5805dea-d12d-4e5e-a96b-443c7a4f944b" containerName="registry-server" Jan 21 11:08:30 crc kubenswrapper[5119]: I0121 11:08:30.582843 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5805dea-d12d-4e5e-a96b-443c7a4f944b" containerName="registry-server" Jan 21 11:08:30 crc kubenswrapper[5119]: I0121 11:08:30.582980 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="c5805dea-d12d-4e5e-a96b-443c7a4f944b" containerName="registry-server" Jan 21 11:08:30 crc kubenswrapper[5119]: I0121 11:08:30.595020 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zqkhj" Jan 21 11:08:30 crc kubenswrapper[5119]: I0121 11:08:30.605182 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zqkhj"] Jan 21 11:08:30 crc kubenswrapper[5119]: I0121 11:08:30.761555 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/720be45c-867a-4b0c-91ab-1353bf37c545-utilities\") pod \"certified-operators-zqkhj\" (UID: \"720be45c-867a-4b0c-91ab-1353bf37c545\") " pod="openshift-marketplace/certified-operators-zqkhj" Jan 21 11:08:30 crc kubenswrapper[5119]: I0121 11:08:30.762188 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h6f4\" (UniqueName: \"kubernetes.io/projected/720be45c-867a-4b0c-91ab-1353bf37c545-kube-api-access-5h6f4\") pod \"certified-operators-zqkhj\" (UID: \"720be45c-867a-4b0c-91ab-1353bf37c545\") " pod="openshift-marketplace/certified-operators-zqkhj" Jan 21 11:08:30 crc kubenswrapper[5119]: I0121 11:08:30.762732 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/720be45c-867a-4b0c-91ab-1353bf37c545-catalog-content\") pod \"certified-operators-zqkhj\" (UID: \"720be45c-867a-4b0c-91ab-1353bf37c545\") " pod="openshift-marketplace/certified-operators-zqkhj" Jan 21 11:08:30 crc kubenswrapper[5119]: I0121 11:08:30.864363 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/720be45c-867a-4b0c-91ab-1353bf37c545-catalog-content\") pod \"certified-operators-zqkhj\" (UID: \"720be45c-867a-4b0c-91ab-1353bf37c545\") " pod="openshift-marketplace/certified-operators-zqkhj" Jan 21 11:08:30 crc kubenswrapper[5119]: I0121 11:08:30.864437 5119 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/720be45c-867a-4b0c-91ab-1353bf37c545-utilities\") pod \"certified-operators-zqkhj\" (UID: \"720be45c-867a-4b0c-91ab-1353bf37c545\") " pod="openshift-marketplace/certified-operators-zqkhj" Jan 21 11:08:30 crc kubenswrapper[5119]: I0121 11:08:30.864504 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5h6f4\" (UniqueName: \"kubernetes.io/projected/720be45c-867a-4b0c-91ab-1353bf37c545-kube-api-access-5h6f4\") pod \"certified-operators-zqkhj\" (UID: \"720be45c-867a-4b0c-91ab-1353bf37c545\") " pod="openshift-marketplace/certified-operators-zqkhj" Jan 21 11:08:30 crc kubenswrapper[5119]: I0121 11:08:30.864948 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/720be45c-867a-4b0c-91ab-1353bf37c545-catalog-content\") pod \"certified-operators-zqkhj\" (UID: \"720be45c-867a-4b0c-91ab-1353bf37c545\") " pod="openshift-marketplace/certified-operators-zqkhj" Jan 21 11:08:30 crc kubenswrapper[5119]: I0121 11:08:30.865074 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/720be45c-867a-4b0c-91ab-1353bf37c545-utilities\") pod \"certified-operators-zqkhj\" (UID: \"720be45c-867a-4b0c-91ab-1353bf37c545\") " pod="openshift-marketplace/certified-operators-zqkhj" Jan 21 11:08:30 crc kubenswrapper[5119]: I0121 11:08:30.884676 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h6f4\" (UniqueName: \"kubernetes.io/projected/720be45c-867a-4b0c-91ab-1353bf37c545-kube-api-access-5h6f4\") pod \"certified-operators-zqkhj\" (UID: \"720be45c-867a-4b0c-91ab-1353bf37c545\") " pod="openshift-marketplace/certified-operators-zqkhj" Jan 21 11:08:30 crc kubenswrapper[5119]: I0121 11:08:30.913067 5119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zqkhj" Jan 21 11:08:31 crc kubenswrapper[5119]: I0121 11:08:31.362762 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zqkhj"] Jan 21 11:08:32 crc kubenswrapper[5119]: I0121 11:08:32.261331 5119 generic.go:358] "Generic (PLEG): container finished" podID="720be45c-867a-4b0c-91ab-1353bf37c545" containerID="ab99367c9aaf6f9c55c3e8911198a0e7e0d64f42e08b6dbce2e49eee6b1306ea" exitCode=0 Jan 21 11:08:32 crc kubenswrapper[5119]: I0121 11:08:32.261625 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqkhj" event={"ID":"720be45c-867a-4b0c-91ab-1353bf37c545","Type":"ContainerDied","Data":"ab99367c9aaf6f9c55c3e8911198a0e7e0d64f42e08b6dbce2e49eee6b1306ea"} Jan 21 11:08:32 crc kubenswrapper[5119]: I0121 11:08:32.261658 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqkhj" event={"ID":"720be45c-867a-4b0c-91ab-1353bf37c545","Type":"ContainerStarted","Data":"53cee0645731d086a9065b0f0a9e5841aa3ef33c604e1c757558e3c2b8dfd59b"} Jan 21 11:08:33 crc kubenswrapper[5119]: E0121 11:08:33.464275 5119 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod720be45c_867a_4b0c_91ab_1353bf37c545.slice/crio-3ad3134dde99c48833015550dd32c370aad3eb49c369f6fac631bc2cd73f2da5.scope\": RecentStats: unable to find data in memory cache]" Jan 21 11:08:34 crc kubenswrapper[5119]: I0121 11:08:34.288208 5119 generic.go:358] "Generic (PLEG): container finished" podID="720be45c-867a-4b0c-91ab-1353bf37c545" containerID="3ad3134dde99c48833015550dd32c370aad3eb49c369f6fac631bc2cd73f2da5" exitCode=0 Jan 21 11:08:34 crc kubenswrapper[5119]: I0121 11:08:34.288870 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqkhj" 
event={"ID":"720be45c-867a-4b0c-91ab-1353bf37c545","Type":"ContainerDied","Data":"3ad3134dde99c48833015550dd32c370aad3eb49c369f6fac631bc2cd73f2da5"} Jan 21 11:08:35 crc kubenswrapper[5119]: I0121 11:08:35.298631 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqkhj" event={"ID":"720be45c-867a-4b0c-91ab-1353bf37c545","Type":"ContainerStarted","Data":"325ec81bdb216d8a067830640704b5631d2336b67b727180a5cd9587035f79aa"} Jan 21 11:08:40 crc kubenswrapper[5119]: I0121 11:08:40.913808 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zqkhj" Jan 21 11:08:40 crc kubenswrapper[5119]: I0121 11:08:40.914414 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-zqkhj" Jan 21 11:08:40 crc kubenswrapper[5119]: I0121 11:08:40.952086 5119 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zqkhj" Jan 21 11:08:40 crc kubenswrapper[5119]: I0121 11:08:40.970032 5119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zqkhj" podStartSLOduration=10.101923189 podStartE2EDuration="10.970013862s" podCreationTimestamp="2026-01-21 11:08:30 +0000 UTC" firstStartedPulling="2026-01-21 11:08:32.262574124 +0000 UTC m=+4427.930665802" lastFinishedPulling="2026-01-21 11:08:33.130664797 +0000 UTC m=+4428.798756475" observedRunningTime="2026-01-21 11:08:35.323563485 +0000 UTC m=+4430.991655163" watchObservedRunningTime="2026-01-21 11:08:40.970013862 +0000 UTC m=+4436.638105540" Jan 21 11:08:41 crc kubenswrapper[5119]: I0121 11:08:41.385702 5119 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zqkhj" Jan 21 11:08:41 crc kubenswrapper[5119]: I0121 11:08:41.424847 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-zqkhj"] Jan 21 11:08:43 crc kubenswrapper[5119]: I0121 11:08:43.363899 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zqkhj" podUID="720be45c-867a-4b0c-91ab-1353bf37c545" containerName="registry-server" containerID="cri-o://325ec81bdb216d8a067830640704b5631d2336b67b727180a5cd9587035f79aa" gracePeriod=2 Jan 21 11:08:44 crc kubenswrapper[5119]: I0121 11:08:44.244959 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zqkhj" Jan 21 11:08:44 crc kubenswrapper[5119]: I0121 11:08:44.354320 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/720be45c-867a-4b0c-91ab-1353bf37c545-utilities\") pod \"720be45c-867a-4b0c-91ab-1353bf37c545\" (UID: \"720be45c-867a-4b0c-91ab-1353bf37c545\") " Jan 21 11:08:44 crc kubenswrapper[5119]: I0121 11:08:44.354413 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h6f4\" (UniqueName: \"kubernetes.io/projected/720be45c-867a-4b0c-91ab-1353bf37c545-kube-api-access-5h6f4\") pod \"720be45c-867a-4b0c-91ab-1353bf37c545\" (UID: \"720be45c-867a-4b0c-91ab-1353bf37c545\") " Jan 21 11:08:44 crc kubenswrapper[5119]: I0121 11:08:44.354513 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/720be45c-867a-4b0c-91ab-1353bf37c545-catalog-content\") pod \"720be45c-867a-4b0c-91ab-1353bf37c545\" (UID: \"720be45c-867a-4b0c-91ab-1353bf37c545\") " Jan 21 11:08:44 crc kubenswrapper[5119]: I0121 11:08:44.355644 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/720be45c-867a-4b0c-91ab-1353bf37c545-utilities" (OuterVolumeSpecName: "utilities") pod "720be45c-867a-4b0c-91ab-1353bf37c545" (UID: 
"720be45c-867a-4b0c-91ab-1353bf37c545"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 11:08:44 crc kubenswrapper[5119]: I0121 11:08:44.360002 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/720be45c-867a-4b0c-91ab-1353bf37c545-kube-api-access-5h6f4" (OuterVolumeSpecName: "kube-api-access-5h6f4") pod "720be45c-867a-4b0c-91ab-1353bf37c545" (UID: "720be45c-867a-4b0c-91ab-1353bf37c545"). InnerVolumeSpecName "kube-api-access-5h6f4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 11:08:44 crc kubenswrapper[5119]: I0121 11:08:44.384000 5119 generic.go:358] "Generic (PLEG): container finished" podID="720be45c-867a-4b0c-91ab-1353bf37c545" containerID="325ec81bdb216d8a067830640704b5631d2336b67b727180a5cd9587035f79aa" exitCode=0 Jan 21 11:08:44 crc kubenswrapper[5119]: I0121 11:08:44.384058 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqkhj" event={"ID":"720be45c-867a-4b0c-91ab-1353bf37c545","Type":"ContainerDied","Data":"325ec81bdb216d8a067830640704b5631d2336b67b727180a5cd9587035f79aa"} Jan 21 11:08:44 crc kubenswrapper[5119]: I0121 11:08:44.384094 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqkhj" event={"ID":"720be45c-867a-4b0c-91ab-1353bf37c545","Type":"ContainerDied","Data":"53cee0645731d086a9065b0f0a9e5841aa3ef33c604e1c757558e3c2b8dfd59b"} Jan 21 11:08:44 crc kubenswrapper[5119]: I0121 11:08:44.384113 5119 scope.go:117] "RemoveContainer" containerID="325ec81bdb216d8a067830640704b5631d2336b67b727180a5cd9587035f79aa" Jan 21 11:08:44 crc kubenswrapper[5119]: I0121 11:08:44.384162 5119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zqkhj" Jan 21 11:08:44 crc kubenswrapper[5119]: I0121 11:08:44.395013 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/720be45c-867a-4b0c-91ab-1353bf37c545-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "720be45c-867a-4b0c-91ab-1353bf37c545" (UID: "720be45c-867a-4b0c-91ab-1353bf37c545"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 11:08:44 crc kubenswrapper[5119]: I0121 11:08:44.402152 5119 scope.go:117] "RemoveContainer" containerID="3ad3134dde99c48833015550dd32c370aad3eb49c369f6fac631bc2cd73f2da5" Jan 21 11:08:44 crc kubenswrapper[5119]: I0121 11:08:44.416751 5119 scope.go:117] "RemoveContainer" containerID="ab99367c9aaf6f9c55c3e8911198a0e7e0d64f42e08b6dbce2e49eee6b1306ea" Jan 21 11:08:44 crc kubenswrapper[5119]: I0121 11:08:44.444525 5119 scope.go:117] "RemoveContainer" containerID="325ec81bdb216d8a067830640704b5631d2336b67b727180a5cd9587035f79aa" Jan 21 11:08:44 crc kubenswrapper[5119]: E0121 11:08:44.445077 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"325ec81bdb216d8a067830640704b5631d2336b67b727180a5cd9587035f79aa\": container with ID starting with 325ec81bdb216d8a067830640704b5631d2336b67b727180a5cd9587035f79aa not found: ID does not exist" containerID="325ec81bdb216d8a067830640704b5631d2336b67b727180a5cd9587035f79aa" Jan 21 11:08:44 crc kubenswrapper[5119]: I0121 11:08:44.445137 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"325ec81bdb216d8a067830640704b5631d2336b67b727180a5cd9587035f79aa"} err="failed to get container status \"325ec81bdb216d8a067830640704b5631d2336b67b727180a5cd9587035f79aa\": rpc error: code = NotFound desc = could not find container \"325ec81bdb216d8a067830640704b5631d2336b67b727180a5cd9587035f79aa\": 
container with ID starting with 325ec81bdb216d8a067830640704b5631d2336b67b727180a5cd9587035f79aa not found: ID does not exist" Jan 21 11:08:44 crc kubenswrapper[5119]: I0121 11:08:44.445156 5119 scope.go:117] "RemoveContainer" containerID="3ad3134dde99c48833015550dd32c370aad3eb49c369f6fac631bc2cd73f2da5" Jan 21 11:08:44 crc kubenswrapper[5119]: E0121 11:08:44.445322 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ad3134dde99c48833015550dd32c370aad3eb49c369f6fac631bc2cd73f2da5\": container with ID starting with 3ad3134dde99c48833015550dd32c370aad3eb49c369f6fac631bc2cd73f2da5 not found: ID does not exist" containerID="3ad3134dde99c48833015550dd32c370aad3eb49c369f6fac631bc2cd73f2da5" Jan 21 11:08:44 crc kubenswrapper[5119]: I0121 11:08:44.445347 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ad3134dde99c48833015550dd32c370aad3eb49c369f6fac631bc2cd73f2da5"} err="failed to get container status \"3ad3134dde99c48833015550dd32c370aad3eb49c369f6fac631bc2cd73f2da5\": rpc error: code = NotFound desc = could not find container \"3ad3134dde99c48833015550dd32c370aad3eb49c369f6fac631bc2cd73f2da5\": container with ID starting with 3ad3134dde99c48833015550dd32c370aad3eb49c369f6fac631bc2cd73f2da5 not found: ID does not exist" Jan 21 11:08:44 crc kubenswrapper[5119]: I0121 11:08:44.445359 5119 scope.go:117] "RemoveContainer" containerID="ab99367c9aaf6f9c55c3e8911198a0e7e0d64f42e08b6dbce2e49eee6b1306ea" Jan 21 11:08:44 crc kubenswrapper[5119]: E0121 11:08:44.445503 5119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab99367c9aaf6f9c55c3e8911198a0e7e0d64f42e08b6dbce2e49eee6b1306ea\": container with ID starting with ab99367c9aaf6f9c55c3e8911198a0e7e0d64f42e08b6dbce2e49eee6b1306ea not found: ID does not exist" 
containerID="ab99367c9aaf6f9c55c3e8911198a0e7e0d64f42e08b6dbce2e49eee6b1306ea"
Jan 21 11:08:44 crc kubenswrapper[5119]: I0121 11:08:44.445521 5119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab99367c9aaf6f9c55c3e8911198a0e7e0d64f42e08b6dbce2e49eee6b1306ea"} err="failed to get container status \"ab99367c9aaf6f9c55c3e8911198a0e7e0d64f42e08b6dbce2e49eee6b1306ea\": rpc error: code = NotFound desc = could not find container \"ab99367c9aaf6f9c55c3e8911198a0e7e0d64f42e08b6dbce2e49eee6b1306ea\": container with ID starting with ab99367c9aaf6f9c55c3e8911198a0e7e0d64f42e08b6dbce2e49eee6b1306ea not found: ID does not exist"
Jan 21 11:08:44 crc kubenswrapper[5119]: I0121 11:08:44.456531 5119 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/720be45c-867a-4b0c-91ab-1353bf37c545-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 11:08:44 crc kubenswrapper[5119]: I0121 11:08:44.456553 5119 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/720be45c-867a-4b0c-91ab-1353bf37c545-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 11:08:44 crc kubenswrapper[5119]: I0121 11:08:44.456564 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5h6f4\" (UniqueName: \"kubernetes.io/projected/720be45c-867a-4b0c-91ab-1353bf37c545-kube-api-access-5h6f4\") on node \"crc\" DevicePath \"\""
Jan 21 11:08:44 crc kubenswrapper[5119]: I0121 11:08:44.706137 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zqkhj"]
Jan 21 11:08:44 crc kubenswrapper[5119]: I0121 11:08:44.712027 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zqkhj"]
Jan 21 11:08:46 crc kubenswrapper[5119]: I0121 11:08:46.598073 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="720be45c-867a-4b0c-91ab-1353bf37c545" path="/var/lib/kubelet/pods/720be45c-867a-4b0c-91ab-1353bf37c545/volumes"
Jan 21 11:08:49 crc kubenswrapper[5119]: I0121 11:08:49.919335 5119 patch_prober.go:28] interesting pod/machine-config-daemon-5vwrk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 11:08:49 crc kubenswrapper[5119]: I0121 11:08:49.919673 5119 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 11:08:49 crc kubenswrapper[5119]: I0121 11:08:49.919718 5119 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk"
Jan 21 11:08:49 crc kubenswrapper[5119]: I0121 11:08:49.920465 5119 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"20d2df21174c09dd6490e1829cf9cd3cf48e6a78f413696ad837c4e5c9e9ab1e"} pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 11:08:49 crc kubenswrapper[5119]: I0121 11:08:49.920550 5119 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" podUID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerName="machine-config-daemon" containerID="cri-o://20d2df21174c09dd6490e1829cf9cd3cf48e6a78f413696ad837c4e5c9e9ab1e" gracePeriod=600
Jan 21 11:08:50 crc kubenswrapper[5119]: I0121 11:08:50.423751 5119 generic.go:358] "Generic (PLEG): container finished" podID="f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5" containerID="20d2df21174c09dd6490e1829cf9cd3cf48e6a78f413696ad837c4e5c9e9ab1e" exitCode=0
Jan 21 11:08:50 crc kubenswrapper[5119]: I0121 11:08:50.423827 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerDied","Data":"20d2df21174c09dd6490e1829cf9cd3cf48e6a78f413696ad837c4e5c9e9ab1e"}
Jan 21 11:08:50 crc kubenswrapper[5119]: I0121 11:08:50.424327 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5vwrk" event={"ID":"f3a5f299-f5ad-44f1-ba34-8b43da0a6cd5","Type":"ContainerStarted","Data":"f511be8250499c7378ac028ddde01b91b6774e054622d13144efa839d926296b"}
Jan 21 11:08:50 crc kubenswrapper[5119]: I0121 11:08:50.424345 5119 scope.go:117] "RemoveContainer" containerID="dc6c3102c5ffd6452be4e45b52d441c7a60dd687732278c05b39c170cb0a0b6a"
Jan 21 11:08:53 crc kubenswrapper[5119]: E0121 11:08:53.821162 5119 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod720be45c_867a_4b0c_91ab_1353bf37c545.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod720be45c_867a_4b0c_91ab_1353bf37c545.slice/crio-53cee0645731d086a9065b0f0a9e5841aa3ef33c604e1c757558e3c2b8dfd59b\": RecentStats: unable to find data in memory cache]"
Jan 21 11:08:54 crc kubenswrapper[5119]: I0121 11:08:54.834345 5119 scope.go:117] "RemoveContainer" containerID="a3601342c0d0a26fc8ecd211bf731fc93f6eefbcfdd054e53fe60aa4994c4452"
Jan 21 11:09:03 crc kubenswrapper[5119]: E0121 11:09:03.978214 5119 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod720be45c_867a_4b0c_91ab_1353bf37c545.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod720be45c_867a_4b0c_91ab_1353bf37c545.slice/crio-53cee0645731d086a9065b0f0a9e5841aa3ef33c604e1c757558e3c2b8dfd59b\": RecentStats: unable to find data in memory cache]"
Jan 21 11:09:14 crc kubenswrapper[5119]: E0121 11:09:14.132014 5119 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod720be45c_867a_4b0c_91ab_1353bf37c545.slice/crio-53cee0645731d086a9065b0f0a9e5841aa3ef33c604e1c757558e3c2b8dfd59b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod720be45c_867a_4b0c_91ab_1353bf37c545.slice\": RecentStats: unable to find data in memory cache]"
Jan 21 11:09:24 crc kubenswrapper[5119]: E0121 11:09:24.297108 5119 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod720be45c_867a_4b0c_91ab_1353bf37c545.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod720be45c_867a_4b0c_91ab_1353bf37c545.slice/crio-53cee0645731d086a9065b0f0a9e5841aa3ef33c604e1c757558e3c2b8dfd59b\": RecentStats: unable to find data in memory cache]"
Jan 21 11:09:34 crc kubenswrapper[5119]: E0121 11:09:34.491786 5119 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod720be45c_867a_4b0c_91ab_1353bf37c545.slice/crio-53cee0645731d086a9065b0f0a9e5841aa3ef33c604e1c757558e3c2b8dfd59b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod720be45c_867a_4b0c_91ab_1353bf37c545.slice\": RecentStats: unable to find data in memory cache]"
Jan 21 11:09:44 crc kubenswrapper[5119]: E0121 11:09:44.639824 5119 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod720be45c_867a_4b0c_91ab_1353bf37c545.slice/crio-53cee0645731d086a9065b0f0a9e5841aa3ef33c604e1c757558e3c2b8dfd59b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod720be45c_867a_4b0c_91ab_1353bf37c545.slice\": RecentStats: unable to find data in memory cache]"
Jan 21 11:09:47 crc kubenswrapper[5119]: I0121 11:09:47.171183 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log"
Jan 21 11:09:47 crc kubenswrapper[5119]: I0121 11:09:47.171426 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7d4r9_c3c35acb-afad-4124-a4e6-bf36f963ecbf/kube-multus/0.log"
Jan 21 11:09:47 crc kubenswrapper[5119]: I0121 11:09:47.176168 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Jan 21 11:09:47 crc kubenswrapper[5119]: I0121 11:09:47.176345 5119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Jan 21 11:10:00 crc kubenswrapper[5119]: I0121 11:10:00.129817 5119 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483230-xll4s"]
Jan 21 11:10:00 crc kubenswrapper[5119]: I0121 11:10:00.131750 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="720be45c-867a-4b0c-91ab-1353bf37c545" containerName="extract-utilities"
Jan 21 11:10:00 crc kubenswrapper[5119]: I0121 11:10:00.131765 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="720be45c-867a-4b0c-91ab-1353bf37c545" containerName="extract-utilities"
Jan 21 11:10:00 crc kubenswrapper[5119]: I0121 11:10:00.131774 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="720be45c-867a-4b0c-91ab-1353bf37c545" containerName="extract-content"
Jan 21 11:10:00 crc kubenswrapper[5119]: I0121 11:10:00.131779 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="720be45c-867a-4b0c-91ab-1353bf37c545" containerName="extract-content"
Jan 21 11:10:00 crc kubenswrapper[5119]: I0121 11:10:00.131813 5119 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="720be45c-867a-4b0c-91ab-1353bf37c545" containerName="registry-server"
Jan 21 11:10:00 crc kubenswrapper[5119]: I0121 11:10:00.131819 5119 state_mem.go:107] "Deleted CPUSet assignment" podUID="720be45c-867a-4b0c-91ab-1353bf37c545" containerName="registry-server"
Jan 21 11:10:00 crc kubenswrapper[5119]: I0121 11:10:00.132816 5119 memory_manager.go:356] "RemoveStaleState removing state" podUID="720be45c-867a-4b0c-91ab-1353bf37c545" containerName="registry-server"
Jan 21 11:10:00 crc kubenswrapper[5119]: I0121 11:10:00.140696 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483230-xll4s"
Jan 21 11:10:00 crc kubenswrapper[5119]: I0121 11:10:00.142700 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483230-xll4s"]
Jan 21 11:10:00 crc kubenswrapper[5119]: I0121 11:10:00.147263 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 11:10:00 crc kubenswrapper[5119]: I0121 11:10:00.147658 5119 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 11:10:00 crc kubenswrapper[5119]: I0121 11:10:00.147852 5119 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-dch56\""
Jan 21 11:10:00 crc kubenswrapper[5119]: I0121 11:10:00.276044 5119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmc6f\" (UniqueName: \"kubernetes.io/projected/13703a3a-aa5c-4167-b36c-8fae2af86722-kube-api-access-mmc6f\") pod \"auto-csr-approver-29483230-xll4s\" (UID: \"13703a3a-aa5c-4167-b36c-8fae2af86722\") " pod="openshift-infra/auto-csr-approver-29483230-xll4s"
Jan 21 11:10:00 crc kubenswrapper[5119]: I0121 11:10:00.377252 5119 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mmc6f\" (UniqueName: \"kubernetes.io/projected/13703a3a-aa5c-4167-b36c-8fae2af86722-kube-api-access-mmc6f\") pod \"auto-csr-approver-29483230-xll4s\" (UID: \"13703a3a-aa5c-4167-b36c-8fae2af86722\") " pod="openshift-infra/auto-csr-approver-29483230-xll4s"
Jan 21 11:10:00 crc kubenswrapper[5119]: I0121 11:10:00.402789 5119 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmc6f\" (UniqueName: \"kubernetes.io/projected/13703a3a-aa5c-4167-b36c-8fae2af86722-kube-api-access-mmc6f\") pod \"auto-csr-approver-29483230-xll4s\" (UID: \"13703a3a-aa5c-4167-b36c-8fae2af86722\") " pod="openshift-infra/auto-csr-approver-29483230-xll4s"
Jan 21 11:10:00 crc kubenswrapper[5119]: I0121 11:10:00.463210 5119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483230-xll4s"
Jan 21 11:10:00 crc kubenswrapper[5119]: I0121 11:10:00.644385 5119 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483230-xll4s"]
Jan 21 11:10:00 crc kubenswrapper[5119]: I0121 11:10:00.937893 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483230-xll4s" event={"ID":"13703a3a-aa5c-4167-b36c-8fae2af86722","Type":"ContainerStarted","Data":"373f60512c31c9b514d199fbfeb04fe6d8ba8a8ecdf44a261360a8f99258a0e2"}
Jan 21 11:10:01 crc kubenswrapper[5119]: I0121 11:10:01.945960 5119 generic.go:358] "Generic (PLEG): container finished" podID="13703a3a-aa5c-4167-b36c-8fae2af86722" containerID="634c9b82fa3942fbaeb625282ef7c8ea7b6533a2537f5805753c71564a72a83a" exitCode=0
Jan 21 11:10:01 crc kubenswrapper[5119]: I0121 11:10:01.946084 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483230-xll4s" event={"ID":"13703a3a-aa5c-4167-b36c-8fae2af86722","Type":"ContainerDied","Data":"634c9b82fa3942fbaeb625282ef7c8ea7b6533a2537f5805753c71564a72a83a"}
Jan 21 11:10:03 crc kubenswrapper[5119]: I0121 11:10:03.247934 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483230-xll4s"
Jan 21 11:10:03 crc kubenswrapper[5119]: I0121 11:10:03.319814 5119 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmc6f\" (UniqueName: \"kubernetes.io/projected/13703a3a-aa5c-4167-b36c-8fae2af86722-kube-api-access-mmc6f\") pod \"13703a3a-aa5c-4167-b36c-8fae2af86722\" (UID: \"13703a3a-aa5c-4167-b36c-8fae2af86722\") "
Jan 21 11:10:03 crc kubenswrapper[5119]: I0121 11:10:03.326408 5119 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13703a3a-aa5c-4167-b36c-8fae2af86722-kube-api-access-mmc6f" (OuterVolumeSpecName: "kube-api-access-mmc6f") pod "13703a3a-aa5c-4167-b36c-8fae2af86722" (UID: "13703a3a-aa5c-4167-b36c-8fae2af86722"). InnerVolumeSpecName "kube-api-access-mmc6f". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 11:10:03 crc kubenswrapper[5119]: I0121 11:10:03.420893 5119 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mmc6f\" (UniqueName: \"kubernetes.io/projected/13703a3a-aa5c-4167-b36c-8fae2af86722-kube-api-access-mmc6f\") on node \"crc\" DevicePath \"\""
Jan 21 11:10:03 crc kubenswrapper[5119]: I0121 11:10:03.966722 5119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483230-xll4s"
Jan 21 11:10:03 crc kubenswrapper[5119]: I0121 11:10:03.967005 5119 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483230-xll4s" event={"ID":"13703a3a-aa5c-4167-b36c-8fae2af86722","Type":"ContainerDied","Data":"373f60512c31c9b514d199fbfeb04fe6d8ba8a8ecdf44a261360a8f99258a0e2"}
Jan 21 11:10:03 crc kubenswrapper[5119]: I0121 11:10:03.967043 5119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="373f60512c31c9b514d199fbfeb04fe6d8ba8a8ecdf44a261360a8f99258a0e2"
Jan 21 11:10:04 crc kubenswrapper[5119]: I0121 11:10:04.313919 5119 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483224-q527q"]
Jan 21 11:10:04 crc kubenswrapper[5119]: I0121 11:10:04.321476 5119 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483224-q527q"]
Jan 21 11:10:04 crc kubenswrapper[5119]: I0121 11:10:04.602721 5119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="676b1f57-46ad-4f31-b5f4-fc3590b0458d" path="/var/lib/kubelet/pods/676b1f57-46ad-4f31-b5f4-fc3590b0458d/volumes"